Nov 12 17:40:45.315818 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 12 17:40:45.315878 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Tue Nov 12 16:24:35 -00 2024
Nov 12 17:40:45.315906 kernel: KASLR disabled due to lack of seed
Nov 12 17:40:45.315924 kernel: efi: EFI v2.7 by EDK II
Nov 12 17:40:45.315941 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b003a98 MEMRESERVE=0x7852ee18
Nov 12 17:40:45.315957 kernel: ACPI: Early table checksum verification disabled
Nov 12 17:40:45.315976 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 12 17:40:45.315992 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 12 17:40:45.316009 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 12 17:40:45.316025 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 12 17:40:45.316046 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 12 17:40:45.316063 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 12 17:40:45.316079 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 12 17:40:45.316096 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 12 17:40:45.316116 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 12 17:40:45.316137 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 12 17:40:45.316155 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 12 17:40:45.316173 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 12 17:40:45.316190 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 12 17:40:45.316207 kernel: printk: bootconsole [uart0] enabled
Nov 12 17:40:45.316224 kernel: NUMA: Failed to initialise from firmware
Nov 12 17:40:45.316242 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 12 17:40:45.316259 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Nov 12 17:40:45.316296 kernel: Zone ranges:
Nov 12 17:40:45.316317 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 12 17:40:45.316335 kernel: DMA32 empty
Nov 12 17:40:45.316359 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 12 17:40:45.316376 kernel: Movable zone start for each node
Nov 12 17:40:45.316393 kernel: Early memory node ranges
Nov 12 17:40:45.316410 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 12 17:40:45.316428 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 12 17:40:45.316445 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Nov 12 17:40:45.316463 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 12 17:40:45.316481 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 12 17:40:45.316498 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 12 17:40:45.316515 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 12 17:40:45.316532 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 12 17:40:45.316549 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 12 17:40:45.316607 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 12 17:40:45.316627 kernel: psci: probing for conduit method from ACPI.
Nov 12 17:40:45.316655 kernel: psci: PSCIv1.0 detected in firmware.
Nov 12 17:40:45.316674 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 12 17:40:45.316694 kernel: psci: Trusted OS migration not required
Nov 12 17:40:45.316720 kernel: psci: SMC Calling Convention v1.1
Nov 12 17:40:45.316739 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Nov 12 17:40:45.316758 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Nov 12 17:40:45.316778 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 12 17:40:45.316796 kernel: Detected PIPT I-cache on CPU0
Nov 12 17:40:45.316815 kernel: CPU features: detected: GIC system register CPU interface
Nov 12 17:40:45.316834 kernel: CPU features: detected: Spectre-v2
Nov 12 17:40:45.316852 kernel: CPU features: detected: Spectre-v3a
Nov 12 17:40:45.316871 kernel: CPU features: detected: Spectre-BHB
Nov 12 17:40:45.316889 kernel: CPU features: detected: ARM erratum 1742098
Nov 12 17:40:45.316909 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 12 17:40:45.316938 kernel: alternatives: applying boot alternatives
Nov 12 17:40:45.316961 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:40:45.316981 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 17:40:45.317001 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 17:40:45.317021 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 12 17:40:45.317039 kernel: Fallback order for Node 0: 0
Nov 12 17:40:45.317059 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Nov 12 17:40:45.317078 kernel: Policy zone: Normal
Nov 12 17:40:45.317097 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 17:40:45.317116 kernel: software IO TLB: area num 2.
Nov 12 17:40:45.317135 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Nov 12 17:40:45.317167 kernel: Memory: 3820216K/4030464K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 210248K reserved, 0K cma-reserved)
Nov 12 17:40:45.317187 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 17:40:45.317205 kernel: trace event string verifier disabled
Nov 12 17:40:45.317225 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 17:40:45.317248 kernel: rcu: RCU event tracing is enabled.
Nov 12 17:40:45.317269 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 17:40:45.317287 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 17:40:45.317307 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 17:40:45.317327 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 17:40:45.317347 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 17:40:45.317366 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 12 17:40:45.317397 kernel: GICv3: 96 SPIs implemented
Nov 12 17:40:45.317417 kernel: GICv3: 0 Extended SPIs implemented
Nov 12 17:40:45.317437 kernel: Root IRQ handler: gic_handle_irq
Nov 12 17:40:45.317456 kernel: GICv3: GICv3 features: 16 PPIs
Nov 12 17:40:45.317476 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 12 17:40:45.317495 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 12 17:40:45.317515 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Nov 12 17:40:45.317535 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Nov 12 17:40:45.321462 kernel: GICv3: using LPI property table @0x00000004000d0000
Nov 12 17:40:45.321594 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 12 17:40:45.321633 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Nov 12 17:40:45.321657 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 17:40:45.321694 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 12 17:40:45.321717 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 12 17:40:45.321739 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 12 17:40:45.321761 kernel: Console: colour dummy device 80x25
Nov 12 17:40:45.321782 kernel: printk: console [tty1] enabled
Nov 12 17:40:45.321804 kernel: ACPI: Core revision 20230628
Nov 12 17:40:45.321823 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 12 17:40:45.321842 kernel: pid_max: default: 32768 minimum: 301
Nov 12 17:40:45.321861 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 17:40:45.321881 kernel: landlock: Up and running.
Nov 12 17:40:45.321905 kernel: SELinux: Initializing.
Nov 12 17:40:45.321923 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:40:45.321943 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 12 17:40:45.321962 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 17:40:45.321980 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 17:40:45.321999 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 17:40:45.322018 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 17:40:45.322037 kernel: Platform MSI: ITS@0x10080000 domain created
Nov 12 17:40:45.322060 kernel: PCI/MSI: ITS@0x10080000 domain created
Nov 12 17:40:45.322078 kernel: Remapping and enabling EFI services.
Nov 12 17:40:45.322097 kernel: smp: Bringing up secondary CPUs ...
Nov 12 17:40:45.322115 kernel: Detected PIPT I-cache on CPU1
Nov 12 17:40:45.322133 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 12 17:40:45.322153 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Nov 12 17:40:45.322171 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 12 17:40:45.322189 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 17:40:45.322208 kernel: SMP: Total of 2 processors activated.
Nov 12 17:40:45.322226 kernel: CPU features: detected: 32-bit EL0 Support
Nov 12 17:40:45.322249 kernel: CPU features: detected: 32-bit EL1 Support
Nov 12 17:40:45.322272 kernel: CPU features: detected: CRC32 instructions
Nov 12 17:40:45.322304 kernel: CPU: All CPU(s) started at EL1
Nov 12 17:40:45.322327 kernel: alternatives: applying system-wide alternatives
Nov 12 17:40:45.322346 kernel: devtmpfs: initialized
Nov 12 17:40:45.322365 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 17:40:45.322384 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 17:40:45.322403 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 17:40:45.322422 kernel: SMBIOS 3.0.0 present.
Nov 12 17:40:45.322445 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 12 17:40:45.322465 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 17:40:45.322484 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 12 17:40:45.322504 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 12 17:40:45.322524 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 12 17:40:45.322545 kernel: audit: initializing netlink subsys (disabled)
Nov 12 17:40:45.322611 kernel: audit: type=2000 audit(0.326:1): state=initialized audit_enabled=0 res=1
Nov 12 17:40:45.322649 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 17:40:45.322670 kernel: cpuidle: using governor menu
Nov 12 17:40:45.322691 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 12 17:40:45.322710 kernel: ASID allocator initialised with 65536 entries
Nov 12 17:40:45.322729 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 17:40:45.322749 kernel: Serial: AMBA PL011 UART driver
Nov 12 17:40:45.322768 kernel: Modules: 17520 pages in range for non-PLT usage
Nov 12 17:40:45.322788 kernel: Modules: 509040 pages in range for PLT usage
Nov 12 17:40:45.322809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 17:40:45.322834 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 17:40:45.322854 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 12 17:40:45.322873 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 12 17:40:45.322892 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 17:40:45.322912 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 17:40:45.322931 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 12 17:40:45.322951 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 12 17:40:45.322974 kernel: ACPI: Added _OSI(Module Device)
Nov 12 17:40:45.322996 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 17:40:45.323025 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 17:40:45.323045 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 17:40:45.323066 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 17:40:45.323086 kernel: ACPI: Interpreter enabled
Nov 12 17:40:45.323106 kernel: ACPI: Using GIC for interrupt routing
Nov 12 17:40:45.323127 kernel: ACPI: MCFG table detected, 1 entries
Nov 12 17:40:45.323148 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 12 17:40:45.325749 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 12 17:40:45.326066 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 12 17:40:45.326303 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 12 17:40:45.326534 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 12 17:40:45.326792 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 12 17:40:45.326823 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 12 17:40:45.326843 kernel: acpiphp: Slot [1] registered
Nov 12 17:40:45.326864 kernel: acpiphp: Slot [2] registered
Nov 12 17:40:45.326884 kernel: acpiphp: Slot [3] registered
Nov 12 17:40:45.326920 kernel: acpiphp: Slot [4] registered
Nov 12 17:40:45.326940 kernel: acpiphp: Slot [5] registered
Nov 12 17:40:45.326959 kernel: acpiphp: Slot [6] registered
Nov 12 17:40:45.326980 kernel: acpiphp: Slot [7] registered
Nov 12 17:40:45.327000 kernel: acpiphp: Slot [8] registered
Nov 12 17:40:45.327020 kernel: acpiphp: Slot [9] registered
Nov 12 17:40:45.327041 kernel: acpiphp: Slot [10] registered
Nov 12 17:40:45.327061 kernel: acpiphp: Slot [11] registered
Nov 12 17:40:45.327082 kernel: acpiphp: Slot [12] registered
Nov 12 17:40:45.327102 kernel: acpiphp: Slot [13] registered
Nov 12 17:40:45.327132 kernel: acpiphp: Slot [14] registered
Nov 12 17:40:45.327151 kernel: acpiphp: Slot [15] registered
Nov 12 17:40:45.327172 kernel: acpiphp: Slot [16] registered
Nov 12 17:40:45.327192 kernel: acpiphp: Slot [17] registered
Nov 12 17:40:45.327212 kernel: acpiphp: Slot [18] registered
Nov 12 17:40:45.327233 kernel: acpiphp: Slot [19] registered
Nov 12 17:40:45.327253 kernel: acpiphp: Slot [20] registered
Nov 12 17:40:45.327273 kernel: acpiphp: Slot [21] registered
Nov 12 17:40:45.327293 kernel: acpiphp: Slot [22] registered
Nov 12 17:40:45.327322 kernel: acpiphp: Slot [23] registered
Nov 12 17:40:45.327344 kernel: acpiphp: Slot [24] registered
Nov 12 17:40:45.327366 kernel: acpiphp: Slot [25] registered
Nov 12 17:40:45.327387 kernel: acpiphp: Slot [26] registered
Nov 12 17:40:45.327408 kernel: acpiphp: Slot [27] registered
Nov 12 17:40:45.327428 kernel: acpiphp: Slot [28] registered
Nov 12 17:40:45.327449 kernel: acpiphp: Slot [29] registered
Nov 12 17:40:45.327469 kernel: acpiphp: Slot [30] registered
Nov 12 17:40:45.327490 kernel: acpiphp: Slot [31] registered
Nov 12 17:40:45.327510 kernel: PCI host bridge to bus 0000:00
Nov 12 17:40:45.330246 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 12 17:40:45.330474 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 12 17:40:45.330733 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 12 17:40:45.330937 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 12 17:40:45.331283 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Nov 12 17:40:45.332788 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Nov 12 17:40:45.333131 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Nov 12 17:40:45.333497 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Nov 12 17:40:45.334849 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Nov 12 17:40:45.335076 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 12 17:40:45.335337 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Nov 12 17:40:45.339484 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Nov 12 17:40:45.339795 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Nov 12 17:40:45.340087 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Nov 12 17:40:45.340392 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 12 17:40:45.340744 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Nov 12 17:40:45.343645 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Nov 12 17:40:45.344002 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Nov 12 17:40:45.344329 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Nov 12 17:40:45.344691 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Nov 12 17:40:45.344930 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 12 17:40:45.345122 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 12 17:40:45.345316 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 12 17:40:45.345343 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 12 17:40:45.345364 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 12 17:40:45.345384 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 12 17:40:45.345403 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 12 17:40:45.345422 kernel: iommu: Default domain type: Translated
Nov 12 17:40:45.345451 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 12 17:40:45.345471 kernel: efivars: Registered efivars operations
Nov 12 17:40:45.345490 kernel: vgaarb: loaded
Nov 12 17:40:45.345510 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 12 17:40:45.345529 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 17:40:45.345549 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 17:40:45.349313 kernel: pnp: PnP ACPI init
Nov 12 17:40:45.349796 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 12 17:40:45.349866 kernel: pnp: PnP ACPI: found 1 devices
Nov 12 17:40:45.349890 kernel: NET: Registered PF_INET protocol family
Nov 12 17:40:45.349912 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 17:40:45.349933 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 12 17:40:45.349955 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 17:40:45.349976 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 12 17:40:45.349998 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 12 17:40:45.350019 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 12 17:40:45.350039 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:40:45.350071 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 12 17:40:45.350092 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 17:40:45.350112 kernel: PCI: CLS 0 bytes, default 64
Nov 12 17:40:45.350132 kernel: kvm [1]: HYP mode not available
Nov 12 17:40:45.350151 kernel: Initialise system trusted keyrings
Nov 12 17:40:45.350173 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 12 17:40:45.350194 kernel: Key type asymmetric registered
Nov 12 17:40:45.350214 kernel: Asymmetric key parser 'x509' registered
Nov 12 17:40:45.350234 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 12 17:40:45.350266 kernel: io scheduler mq-deadline registered
Nov 12 17:40:45.350288 kernel: io scheduler kyber registered
Nov 12 17:40:45.350308 kernel: io scheduler bfq registered
Nov 12 17:40:45.351959 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 12 17:40:45.352008 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 12 17:40:45.352031 kernel: ACPI: button: Power Button [PWRB]
Nov 12 17:40:45.352052 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 12 17:40:45.352072 kernel: ACPI: button: Sleep Button [SLPB]
Nov 12 17:40:45.352110 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 17:40:45.352131 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 12 17:40:45.352923 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 12 17:40:45.352964 kernel: printk: console [ttyS0] disabled
Nov 12 17:40:45.352985 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 12 17:40:45.353005 kernel: printk: console [ttyS0] enabled
Nov 12 17:40:45.353024 kernel: printk: bootconsole [uart0] disabled
Nov 12 17:40:45.353044 kernel: thunder_xcv, ver 1.0
Nov 12 17:40:45.353063 kernel: thunder_bgx, ver 1.0
Nov 12 17:40:45.353082 kernel: nicpf, ver 1.0
Nov 12 17:40:45.353113 kernel: nicvf, ver 1.0
Nov 12 17:40:45.353349 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 12 17:40:45.353641 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-11-12T17:40:44 UTC (1731433244)
Nov 12 17:40:45.353684 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 17:40:45.353705 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Nov 12 17:40:45.353726 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 12 17:40:45.353747 kernel: watchdog: Hard watchdog permanently disabled
Nov 12 17:40:45.353786 kernel: NET: Registered PF_INET6 protocol family
Nov 12 17:40:45.353807 kernel: Segment Routing with IPv6
Nov 12 17:40:45.353827 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 17:40:45.353847 kernel: NET: Registered PF_PACKET protocol family
Nov 12 17:40:45.353868 kernel: Key type dns_resolver registered
Nov 12 17:40:45.353889 kernel: registered taskstats version 1
Nov 12 17:40:45.353910 kernel: Loading compiled-in X.509 certificates
Nov 12 17:40:45.353930 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 277bea35d8d47c9841f307ab609d4271c3622dcb'
Nov 12 17:40:45.353951 kernel: Key type .fscrypt registered
Nov 12 17:40:45.353970 kernel: Key type fscrypt-provisioning registered
Nov 12 17:40:45.353999 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 17:40:45.354019 kernel: ima: Allocated hash algorithm: sha1
Nov 12 17:40:45.354039 kernel: ima: No architecture policies found
Nov 12 17:40:45.354058 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 12 17:40:45.354078 kernel: clk: Disabling unused clocks
Nov 12 17:40:45.354098 kernel: Freeing unused kernel memory: 39360K
Nov 12 17:40:45.354118 kernel: Run /init as init process
Nov 12 17:40:45.354139 kernel: with arguments:
Nov 12 17:40:45.354159 kernel: /init
Nov 12 17:40:45.354187 kernel: with environment:
Nov 12 17:40:45.354207 kernel: HOME=/
Nov 12 17:40:45.354227 kernel: TERM=linux
Nov 12 17:40:45.354247 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 17:40:45.354275 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:40:45.354301 systemd[1]: Detected virtualization amazon.
Nov 12 17:40:45.354325 systemd[1]: Detected architecture arm64.
Nov 12 17:40:45.354354 systemd[1]: Running in initrd.
Nov 12 17:40:45.354377 systemd[1]: No hostname configured, using default hostname.
Nov 12 17:40:45.354400 systemd[1]: Hostname set to .
Nov 12 17:40:45.354424 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:40:45.354446 systemd[1]: Queued start job for default target initrd.target.
Nov 12 17:40:45.354469 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:40:45.354492 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:40:45.354516 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 17:40:45.354547 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:40:45.354601 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 17:40:45.354625 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 17:40:45.354649 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 17:40:45.354671 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 17:40:45.354692 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:40:45.354713 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:40:45.354741 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:40:45.354762 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:40:45.354783 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:40:45.354803 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:40:45.354824 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:40:45.354845 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:40:45.354866 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 17:40:45.354887 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 17:40:45.354908 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:40:45.354934 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:40:45.354955 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:40:45.354976 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:40:45.354997 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 17:40:45.355018 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:40:45.355119 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 17:40:45.356050 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 17:40:45.356399 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:40:45.357149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:40:45.357210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:40:45.357234 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 17:40:45.357256 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:40:45.357278 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 17:40:45.357361 systemd-journald[251]: Collecting audit messages is disabled.
Nov 12 17:40:45.357417 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:40:45.357440 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 17:40:45.357462 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:40:45.357488 kernel: Bridge firewalling registered
Nov 12 17:40:45.357509 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:40:45.357531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:45.357590 systemd-journald[251]: Journal started
Nov 12 17:40:45.357636 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2530122e05ef1e9555f523d2fb5d19) is 8.0M, max 75.3M, 67.3M free.
Nov 12 17:40:45.291512 systemd-modules-load[252]: Inserted module 'overlay'
Nov 12 17:40:45.344663 systemd-modules-load[252]: Inserted module 'br_netfilter'
Nov 12 17:40:45.370678 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 17:40:45.378450 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:40:45.389862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:40:45.389932 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:40:45.396307 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:40:45.438133 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:40:45.442413 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:40:45.464131 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:40:45.468656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:40:45.483914 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:40:45.497189 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 17:40:45.535470 dracut-cmdline[290]: dracut-dracut-053
Nov 12 17:40:45.544652 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c276c03cfeb31103ba0b5f1af613bdc698463ad3d29e6750e34154929bf187e
Nov 12 17:40:45.549524 systemd-resolved[285]: Positive Trust Anchors:
Nov 12 17:40:45.549545 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:40:45.549638 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:40:45.750599 kernel: SCSI subsystem initialized
Nov 12 17:40:45.758599 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 17:40:45.770600 kernel: iscsi: registered transport (tcp)
Nov 12 17:40:45.793374 kernel: iscsi: registered transport (qla4xxx)
Nov 12 17:40:45.793457 kernel: QLogic iSCSI HBA Driver
Nov 12 17:40:45.818022 kernel: random: crng init done
Nov 12 17:40:45.818006 systemd-resolved[285]: Defaulting to hostname 'linux'.
Nov 12 17:40:45.821714 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:40:45.824241 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:40:45.899109 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 17:40:45.909933 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 17:40:45.969371 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 17:40:45.969639 kernel: device-mapper: uevent: version 1.0.3 Nov 12 17:40:45.972182 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 17:40:46.045688 kernel: raid6: neonx8 gen() 6520 MB/s Nov 12 17:40:46.062624 kernel: raid6: neonx4 gen() 6317 MB/s Nov 12 17:40:46.079682 kernel: raid6: neonx2 gen() 5314 MB/s Nov 12 17:40:46.096621 kernel: raid6: neonx1 gen() 3892 MB/s Nov 12 17:40:46.113631 kernel: raid6: int64x8 gen() 3763 MB/s Nov 12 17:40:46.130613 kernel: raid6: int64x4 gen() 3650 MB/s Nov 12 17:40:46.147633 kernel: raid6: int64x2 gen() 3549 MB/s Nov 12 17:40:46.165667 kernel: raid6: int64x1 gen() 2743 MB/s Nov 12 17:40:46.165745 kernel: raid6: using algorithm neonx8 gen() 6520 MB/s Nov 12 17:40:46.184696 kernel: raid6: .... xor() 4799 MB/s, rmw enabled Nov 12 17:40:46.184775 kernel: raid6: using neon recovery algorithm Nov 12 17:40:46.194087 kernel: xor: measuring software checksum speed Nov 12 17:40:46.194162 kernel: 8regs : 10968 MB/sec Nov 12 17:40:46.195254 kernel: 32regs : 11981 MB/sec Nov 12 17:40:46.197625 kernel: arm64_neon : 8669 MB/sec Nov 12 17:40:46.197677 kernel: xor: using function: 32regs (11981 MB/sec) Nov 12 17:40:46.285606 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 17:40:46.307159 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 17:40:46.316902 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 17:40:46.362220 systemd-udevd[471]: Using default interface naming scheme 'v255'. 
Nov 12 17:40:46.373358 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 17:40:46.391182 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 12 17:40:46.433231 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation Nov 12 17:40:46.505723 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:40:46.516903 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 17:40:46.657767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:40:46.678101 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 17:40:46.719278 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 17:40:46.731631 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 17:40:46.749645 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:40:46.756153 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 17:40:46.776044 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 17:40:46.821527 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 17:40:46.906643 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 12 17:40:46.917653 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 12 17:40:46.956473 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 12 17:40:46.956917 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 12 17:40:46.957331 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:b0:6f:d8:75:31 Nov 12 17:40:46.957734 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 12 17:40:46.921719 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 12 17:40:46.921945 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 17:40:46.924888 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:40:46.927177 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 17:40:46.927459 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:40:46.930977 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:40:46.974683 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 12 17:40:46.942514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 17:40:46.991645 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 12 17:40:46.998918 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 17:40:47.015196 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 12 17:40:47.015239 kernel: GPT:9289727 != 16777215 Nov 12 17:40:47.015267 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 12 17:40:47.015293 kernel: GPT:9289727 != 16777215 Nov 12 17:40:47.015318 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 12 17:40:47.015355 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 12 17:40:47.006446 (udev-worker)[516]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:40:47.021090 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 17:40:47.082947 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 17:40:47.104611 kernel: BTRFS: device fsid 93a9d474-e751-47b7-a65f-e39ca9abd47a devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (526) Nov 12 17:40:47.138631 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (541) Nov 12 17:40:47.216732 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 12 17:40:47.294523 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 12 17:40:47.311379 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 12 17:40:47.324782 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Nov 12 17:40:47.327366 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 12 17:40:47.352172 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 17:40:47.363835 disk-uuid[660]: Primary Header is updated. Nov 12 17:40:47.363835 disk-uuid[660]: Secondary Entries is updated. Nov 12 17:40:47.363835 disk-uuid[660]: Secondary Header is updated. Nov 12 17:40:47.375627 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 12 17:40:47.384726 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 12 17:40:47.393812 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 12 17:40:48.396662 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 12 17:40:48.398323 disk-uuid[661]: The operation has completed successfully. Nov 12 17:40:48.588352 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 17:40:48.588647 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 17:40:48.641918 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... 
Nov 12 17:40:48.663514 sh[1004]: Success Nov 12 17:40:48.684975 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Nov 12 17:40:48.808654 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 17:40:48.814776 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 17:40:48.821702 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Nov 12 17:40:48.872618 kernel: BTRFS info (device dm-0): first mount of filesystem 93a9d474-e751-47b7-a65f-e39ca9abd47a Nov 12 17:40:48.872717 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:40:48.872746 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 17:40:48.874887 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 17:40:48.874960 kernel: BTRFS info (device dm-0): using free space tree Nov 12 17:40:48.890610 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 12 17:40:48.895638 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 17:40:48.899851 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 17:40:48.912888 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 17:40:48.919617 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 17:40:48.956600 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:40:48.956699 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:40:48.956731 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 12 17:40:48.966624 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 12 17:40:48.991027 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Nov 12 17:40:48.992892 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:40:49.005474 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 17:40:49.017220 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 17:40:49.165834 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 17:40:49.178945 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 17:40:49.184281 ignition[1103]: Ignition 2.19.0 Nov 12 17:40:49.186299 ignition[1103]: Stage: fetch-offline Nov 12 17:40:49.187372 ignition[1103]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:49.187399 ignition[1103]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:49.194380 ignition[1103]: Ignition finished successfully Nov 12 17:40:49.199597 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:40:49.271635 systemd-networkd[1203]: lo: Link UP Nov 12 17:40:49.272161 systemd-networkd[1203]: lo: Gained carrier Nov 12 17:40:49.276489 systemd-networkd[1203]: Enumeration completed Nov 12 17:40:49.277898 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 17:40:49.277906 systemd-networkd[1203]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 17:40:49.278430 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 17:40:49.290431 systemd[1]: Reached target network.target - Network. Nov 12 17:40:49.294428 systemd-networkd[1203]: eth0: Link UP Nov 12 17:40:49.294437 systemd-networkd[1203]: eth0: Gained carrier Nov 12 17:40:49.294461 systemd-networkd[1203]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Nov 12 17:40:49.327406 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 12 17:40:49.331271 systemd-networkd[1203]: eth0: DHCPv4 address 172.31.24.62/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 12 17:40:49.376905 ignition[1206]: Ignition 2.19.0 Nov 12 17:40:49.377486 ignition[1206]: Stage: fetch Nov 12 17:40:49.378468 ignition[1206]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:49.378496 ignition[1206]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:49.378709 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:49.391918 ignition[1206]: PUT result: OK Nov 12 17:40:49.396019 ignition[1206]: parsed url from cmdline: "" Nov 12 17:40:49.396048 ignition[1206]: no config URL provided Nov 12 17:40:49.396070 ignition[1206]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 17:40:49.396107 ignition[1206]: no config at "/usr/lib/ignition/user.ign" Nov 12 17:40:49.396155 ignition[1206]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:49.405073 ignition[1206]: PUT result: OK Nov 12 17:40:49.405264 ignition[1206]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 12 17:40:49.407040 ignition[1206]: GET result: OK Nov 12 17:40:49.407268 ignition[1206]: parsing config with SHA512: 43771d373c998af656d6c4138270d4b5c1b062c94d4186a503f67789dfa88a22d87e19e50ba7b5d72cc1def9852b6e85149e38182e4d26486d62221482e74ccc Nov 12 17:40:49.422427 unknown[1206]: fetched base config from "system" Nov 12 17:40:49.424512 unknown[1206]: fetched base config from "system" Nov 12 17:40:49.426605 unknown[1206]: fetched user config from "aws" Nov 12 17:40:49.430120 ignition[1206]: fetch: fetch complete Nov 12 17:40:49.430375 ignition[1206]: fetch: fetch passed Nov 12 17:40:49.430488 ignition[1206]: Ignition finished successfully Nov 12 17:40:49.439717 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Nov 12 17:40:49.453032 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 17:40:49.500878 ignition[1214]: Ignition 2.19.0 Nov 12 17:40:49.500943 ignition[1214]: Stage: kargs Nov 12 17:40:49.502499 ignition[1214]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:49.502535 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:49.503419 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:49.507900 ignition[1214]: PUT result: OK Nov 12 17:40:49.518442 ignition[1214]: kargs: kargs passed Nov 12 17:40:49.519023 ignition[1214]: Ignition finished successfully Nov 12 17:40:49.526286 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 17:40:49.548801 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 17:40:49.578371 ignition[1220]: Ignition 2.19.0 Nov 12 17:40:49.578422 ignition[1220]: Stage: disks Nov 12 17:40:49.580639 ignition[1220]: no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:49.580685 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:49.581345 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:49.584807 ignition[1220]: PUT result: OK Nov 12 17:40:49.595462 ignition[1220]: disks: disks passed Nov 12 17:40:49.595966 ignition[1220]: Ignition finished successfully Nov 12 17:40:49.602743 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 17:40:49.608498 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 17:40:49.612085 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 17:40:49.620314 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 17:40:49.622955 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 17:40:49.626462 systemd[1]: Reached target basic.target - Basic System. 
Nov 12 17:40:49.649713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 17:40:49.704405 systemd-fsck[1228]: ROOT: clean, 14/553520 files, 52654/553472 blocks Nov 12 17:40:49.711633 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 17:40:49.722748 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 17:40:49.834774 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b3af0fd7-3c7c-4cdc-9b88-dae3d10ea922 r/w with ordered data mode. Quota mode: none. Nov 12 17:40:49.835754 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 17:40:49.841773 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 17:40:49.867791 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 17:40:49.875020 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 17:40:49.877629 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 12 17:40:49.877717 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 17:40:49.877767 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 17:40:49.910634 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1247) Nov 12 17:40:49.914837 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:40:49.914933 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:40:49.917632 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 12 17:40:49.921079 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 12 17:40:49.929956 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Nov 12 17:40:49.937597 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 12 17:40:49.948131 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 17:40:50.043874 initrd-setup-root[1271]: cut: /sysroot/etc/passwd: No such file or directory Nov 12 17:40:50.054648 initrd-setup-root[1278]: cut: /sysroot/etc/group: No such file or directory Nov 12 17:40:50.066062 initrd-setup-root[1285]: cut: /sysroot/etc/shadow: No such file or directory Nov 12 17:40:50.076679 initrd-setup-root[1292]: cut: /sysroot/etc/gshadow: No such file or directory Nov 12 17:40:50.254827 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 12 17:40:50.272967 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 12 17:40:50.280525 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 12 17:40:50.302460 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 12 17:40:50.304945 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:40:50.362809 ignition[1360]: INFO : Ignition 2.19.0 Nov 12 17:40:50.362809 ignition[1360]: INFO : Stage: mount Nov 12 17:40:50.366851 ignition[1360]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:50.366851 ignition[1360]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:50.366851 ignition[1360]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:50.372393 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 12 17:40:50.379148 ignition[1360]: INFO : PUT result: OK Nov 12 17:40:50.386888 ignition[1360]: INFO : mount: mount passed Nov 12 17:40:50.388799 ignition[1360]: INFO : Ignition finished successfully Nov 12 17:40:50.394788 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 12 17:40:50.404762 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 12 17:40:50.444035 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Nov 12 17:40:50.484854 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1371) Nov 12 17:40:50.484950 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 936a2172-6c61-4af6-a047-e38e0a3ff18b Nov 12 17:40:50.488298 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 12 17:40:50.488448 kernel: BTRFS info (device nvme0n1p6): using free space tree Nov 12 17:40:50.495652 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 12 17:40:50.500765 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 17:40:50.555696 ignition[1388]: INFO : Ignition 2.19.0 Nov 12 17:40:50.559085 ignition[1388]: INFO : Stage: files Nov 12 17:40:50.559085 ignition[1388]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:50.559085 ignition[1388]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:50.559085 ignition[1388]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:50.567679 ignition[1388]: INFO : PUT result: OK Nov 12 17:40:50.573540 ignition[1388]: DEBUG : files: compiled without relabeling support, skipping Nov 12 17:40:50.577078 ignition[1388]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 12 17:40:50.577078 ignition[1388]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 12 17:40:50.586405 ignition[1388]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 12 17:40:50.589524 ignition[1388]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 12 17:40:50.592968 unknown[1388]: wrote ssh authorized keys file for user: core Nov 12 17:40:50.597588 ignition[1388]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 12 17:40:50.601102 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file 
"/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 17:40:50.606697 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Nov 12 17:40:50.688969 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 12 17:40:50.855090 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Nov 12 17:40:50.855090 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 17:40:50.862892 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 12 17:40:50.887635 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 12 17:40:50.887635 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/etc/flatcar/update.conf" Nov 12 17:40:50.887635 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Nov 12 17:40:50.887635 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Nov 12 17:40:50.887635 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Nov 12 17:40:50.887635 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Nov 12 17:40:51.222785 systemd-networkd[1203]: eth0: Gained IPv6LL Nov 12 17:40:51.377638 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 12 17:40:51.761198 ignition[1388]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Nov 12 17:40:51.761198 ignition[1388]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: op(d): [finished] setting 
preset to enabled for "prepare-helm.service" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 12 17:40:51.770010 ignition[1388]: INFO : files: files passed Nov 12 17:40:51.770010 ignition[1388]: INFO : Ignition finished successfully Nov 12 17:40:51.797241 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 12 17:40:51.809878 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 12 17:40:51.816483 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 12 17:40:51.831366 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 12 17:40:51.833467 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 12 17:40:51.863535 initrd-setup-root-after-ignition[1417]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:40:51.863535 initrd-setup-root-after-ignition[1417]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:40:51.871043 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 12 17:40:51.878754 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:40:51.885114 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 12 17:40:51.896944 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 12 17:40:51.978142 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 12 17:40:51.980300 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Nov 12 17:40:51.985287 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 12 17:40:51.989505 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 12 17:40:51.992111 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 12 17:40:52.018014 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 12 17:40:52.046857 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 17:40:52.060880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 12 17:40:52.097160 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 12 17:40:52.099537 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 17:40:52.104084 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 17:40:52.106443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 17:40:52.107143 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 17:40:52.113305 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 17:40:52.115184 systemd[1]: Stopped target basic.target - Basic System. Nov 12 17:40:52.121265 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 17:40:52.123574 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 17:40:52.126385 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 17:40:52.131195 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 17:40:52.134339 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 17:40:52.139622 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 17:40:52.144721 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Nov 12 17:40:52.148985 systemd[1]: Stopped target swap.target - Swaps. Nov 12 17:40:52.163356 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 17:40:52.164458 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 17:40:52.168769 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 12 17:40:52.169018 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 17:40:52.169252 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 17:40:52.172925 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 17:40:52.175378 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 17:40:52.177515 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 17:40:52.186234 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 17:40:52.186631 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 17:40:52.193175 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 17:40:52.193385 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 17:40:52.216821 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 17:40:52.227520 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 17:40:52.229361 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 17:40:52.229684 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 17:40:52.236357 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 17:40:52.236634 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 17:40:52.265330 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 17:40:52.265621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 12 17:40:52.282646 ignition[1441]: INFO : Ignition 2.19.0 Nov 12 17:40:52.282646 ignition[1441]: INFO : Stage: umount Nov 12 17:40:52.286486 ignition[1441]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 17:40:52.289499 ignition[1441]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 12 17:40:52.289499 ignition[1441]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 12 17:40:52.295222 ignition[1441]: INFO : PUT result: OK Nov 12 17:40:52.300614 ignition[1441]: INFO : umount: umount passed Nov 12 17:40:52.300614 ignition[1441]: INFO : Ignition finished successfully Nov 12 17:40:52.306335 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 17:40:52.308249 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 17:40:52.317662 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 17:40:52.318923 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 17:40:52.319106 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 12 17:40:52.321976 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 17:40:52.322123 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 17:40:52.325791 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 17:40:52.325897 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 17:40:52.344419 systemd[1]: Stopped target network.target - Network. Nov 12 17:40:52.348218 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 17:40:52.348376 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 17:40:52.350900 systemd[1]: Stopped target paths.target - Path Units. Nov 12 17:40:52.352976 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 17:40:52.359517 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Nov 12 17:40:52.362094 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 17:40:52.365062 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 17:40:52.371823 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 17:40:52.372340 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 17:40:52.376703 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 17:40:52.376820 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 17:40:52.386433 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 17:40:52.386613 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 17:40:52.390900 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 17:40:52.391044 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 17:40:52.394249 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 17:40:52.398493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 17:40:52.405286 systemd-networkd[1203]: eth0: DHCPv6 lease lost
Nov 12 17:40:52.415204 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 17:40:52.415744 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 17:40:52.421521 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 17:40:52.421970 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 17:40:52.445269 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 17:40:52.445414 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:40:52.463924 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 17:40:52.469719 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 17:40:52.469874 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 17:40:52.472943 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 17:40:52.473099 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:40:52.477530 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 17:40:52.477737 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:40:52.483078 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 17:40:52.483292 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:40:52.506739 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:40:52.544494 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 17:40:52.547671 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 17:40:52.552836 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 17:40:52.553301 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:40:52.561922 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 17:40:52.562913 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 17:40:52.570283 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 17:40:52.570505 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:40:52.572688 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 17:40:52.572778 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:40:52.573485 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 17:40:52.573668 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 17:40:52.586933 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 17:40:52.587951 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 17:40:52.593373 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 17:40:52.593484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 17:40:52.598388 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 17:40:52.598547 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 17:40:52.617960 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 17:40:52.625870 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 17:40:52.626180 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:40:52.631879 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 17:40:52.632004 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:40:52.632633 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 17:40:52.632735 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:40:52.633330 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 17:40:52.633435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:52.675139 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 17:40:52.675593 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 17:40:52.683434 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 17:40:52.692959 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 17:40:52.724922 systemd[1]: Switching root.
Nov 12 17:40:52.761931 systemd-journald[251]: Journal stopped
Nov 12 17:40:54.786704 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Nov 12 17:40:54.786859 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 17:40:54.786910 kernel: SELinux: policy capability open_perms=1
Nov 12 17:40:54.786946 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 17:40:54.786979 kernel: SELinux: policy capability always_check_network=0
Nov 12 17:40:54.787013 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 17:40:54.787056 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 17:40:54.787092 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 17:40:54.787152 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 17:40:54.787185 kernel: audit: type=1403 audit(1731433253.048:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 17:40:54.787220 systemd[1]: Successfully loaded SELinux policy in 55.697ms.
Nov 12 17:40:54.787263 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.622ms.
Nov 12 17:40:54.787303 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 17:40:54.787338 systemd[1]: Detected virtualization amazon.
Nov 12 17:40:54.787374 systemd[1]: Detected architecture arm64.
Nov 12 17:40:54.787415 systemd[1]: Detected first boot.
Nov 12 17:40:54.787451 systemd[1]: Initializing machine ID from VM UUID.
Nov 12 17:40:54.787491 zram_generator::config[1484]: No configuration found.
Nov 12 17:40:54.787540 systemd[1]: Populated /etc with preset unit settings.
Nov 12 17:40:54.787939 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 17:40:54.787995 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 17:40:54.788035 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 17:40:54.788077 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 17:40:54.788133 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 17:40:54.788172 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 17:40:54.788208 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 17:40:54.788297 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 17:40:54.788334 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 17:40:54.788374 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 17:40:54.788407 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 17:40:54.788441 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 17:40:54.788485 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 17:40:54.788518 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 17:40:54.791763 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 17:40:54.791849 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 17:40:54.791887 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 17:40:54.791922 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 17:40:54.791957 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 17:40:54.791990 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 17:40:54.792025 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 17:40:54.792067 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 17:40:54.792105 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 17:40:54.792358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 17:40:54.792409 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 17:40:54.792448 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 17:40:54.792482 systemd[1]: Reached target swap.target - Swaps.
Nov 12 17:40:54.792513 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 17:40:54.792622 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 17:40:54.792669 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 17:40:54.792702 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 17:40:54.792740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 17:40:54.792775 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 17:40:54.792813 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 17:40:54.792844 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 17:40:54.792878 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 17:40:54.792911 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 17:40:54.792942 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 17:40:54.792983 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 17:40:54.793456 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 17:40:54.800863 systemd[1]: Reached target machines.target - Containers.
Nov 12 17:40:54.800919 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 17:40:54.800956 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:54.800993 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 17:40:54.801025 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 17:40:54.801057 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:40:54.801091 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:40:54.801239 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:40:54.801298 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 17:40:54.801335 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:40:54.801369 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 17:40:54.801403 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 17:40:54.801437 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 17:40:54.801488 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 17:40:54.801524 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 17:40:54.801649 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 17:40:54.801692 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 17:40:54.801726 kernel: fuse: init (API version 7.39)
Nov 12 17:40:54.801764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 17:40:54.801795 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 17:40:54.801828 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 17:40:54.801871 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 17:40:54.801913 systemd[1]: Stopped verity-setup.service.
Nov 12 17:40:54.801948 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 17:40:54.802001 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 17:40:54.802037 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 17:40:54.802074 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 17:40:54.802112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 17:40:54.802146 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 17:40:54.802189 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 17:40:54.802220 kernel: loop: module loaded
Nov 12 17:40:54.802258 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 17:40:54.802302 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 17:40:54.802344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:40:54.802383 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:40:54.802425 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:40:54.802457 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:40:54.802491 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 17:40:54.802536 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 17:40:54.805648 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:40:54.805714 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:40:54.805761 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 17:40:54.805795 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 17:40:54.805890 systemd-journald[1559]: Collecting audit messages is disabled.
Nov 12 17:40:54.805958 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 17:40:54.805994 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 17:40:54.806027 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:40:54.806058 systemd-journald[1559]: Journal started
Nov 12 17:40:54.806110 systemd-journald[1559]: Runtime Journal (/run/log/journal/ec2530122e05ef1e9555f523d2fb5d19) is 8.0M, max 75.3M, 67.3M free.
Nov 12 17:40:54.168102 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 17:40:54.198698 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Nov 12 17:40:54.199738 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 17:40:54.827462 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 17:40:54.836073 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 17:40:54.836180 kernel: ACPI: bus type drm_connector registered
Nov 12 17:40:54.842452 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:40:54.843062 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:40:54.846103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 17:40:54.856400 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 17:40:54.860943 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 17:40:54.864077 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 17:40:54.936688 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 17:40:54.936923 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 17:40:54.944406 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 17:40:54.955444 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Nov 12 17:40:54.955487 systemd-tmpfiles[1573]: ACLs are not supported, ignoring.
Nov 12 17:40:54.959014 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 17:40:54.974893 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 17:40:54.977966 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:54.984913 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 17:40:54.991895 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 17:40:54.994400 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:40:55.000871 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 17:40:55.008082 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 17:40:55.015634 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 17:40:55.022734 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 17:40:55.026624 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 17:40:55.074601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 17:40:55.086100 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 17:40:55.128849 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 17:40:55.133168 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 17:40:55.145988 systemd-journald[1559]: Time spent on flushing to /var/log/journal/ec2530122e05ef1e9555f523d2fb5d19 is 194.787ms for 915 entries.
Nov 12 17:40:55.145988 systemd-journald[1559]: System Journal (/var/log/journal/ec2530122e05ef1e9555f523d2fb5d19) is 8.0M, max 195.6M, 187.6M free.
Nov 12 17:40:55.379081 kernel: loop0: detected capacity change from 0 to 194096
Nov 12 17:40:55.379166 systemd-journald[1559]: Received client request to flush runtime journal.
Nov 12 17:40:55.379242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 17:40:55.379285 kernel: loop1: detected capacity change from 0 to 52536
Nov 12 17:40:55.154104 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 17:40:55.267457 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 17:40:55.342705 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 17:40:55.360278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 17:40:55.369173 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 17:40:55.376250 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 17:40:55.390009 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 17:40:55.435511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 17:40:55.450104 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 17:40:55.470471 systemd-tmpfiles[1631]: ACLs are not supported, ignoring.
Nov 12 17:40:55.470518 systemd-tmpfiles[1631]: ACLs are not supported, ignoring.
Nov 12 17:40:55.482921 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 17:40:55.497833 kernel: loop2: detected capacity change from 0 to 114328
Nov 12 17:40:55.518048 udevadm[1637]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Nov 12 17:40:55.558630 kernel: loop3: detected capacity change from 0 to 114432
Nov 12 17:40:55.627651 kernel: loop4: detected capacity change from 0 to 194096
Nov 12 17:40:55.675733 kernel: loop5: detected capacity change from 0 to 52536
Nov 12 17:40:55.708616 kernel: loop6: detected capacity change from 0 to 114328
Nov 12 17:40:55.745607 kernel: loop7: detected capacity change from 0 to 114432
Nov 12 17:40:55.777849 (sd-merge)[1643]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Nov 12 17:40:55.779399 (sd-merge)[1643]: Merged extensions into '/usr'.
Nov 12 17:40:55.797337 systemd[1]: Reloading requested from client PID 1613 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 17:40:55.797417 systemd[1]: Reloading...
Nov 12 17:40:56.134608 zram_generator::config[1669]: No configuration found.
Nov 12 17:40:56.142301 ldconfig[1608]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 17:40:56.435804 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:40:56.559127 systemd[1]: Reloading finished in 759 ms.
Nov 12 17:40:56.597040 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 17:40:56.604416 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 17:40:56.629252 systemd[1]: Starting ensure-sysext.service...
Nov 12 17:40:56.642098 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 17:40:56.646914 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 17:40:56.670222 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 17:40:56.687551 systemd[1]: Reloading requested from client PID 1721 ('systemctl') (unit ensure-sysext.service)...
Nov 12 17:40:56.687620 systemd[1]: Reloading...
Nov 12 17:40:56.690023 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 17:40:56.691261 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 17:40:56.693339 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 17:40:56.694192 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Nov 12 17:40:56.694465 systemd-tmpfiles[1722]: ACLs are not supported, ignoring.
Nov 12 17:40:56.702766 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:40:56.702788 systemd-tmpfiles[1722]: Skipping /boot
Nov 12 17:40:56.728521 systemd-tmpfiles[1722]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 17:40:56.728943 systemd-tmpfiles[1722]: Skipping /boot
Nov 12 17:40:56.830418 systemd-udevd[1725]: Using default interface naming scheme 'v255'.
Nov 12 17:40:56.962164 zram_generator::config[1764]: No configuration found.
Nov 12 17:40:57.112868 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1776)
Nov 12 17:40:57.164430 (udev-worker)[1795]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 17:40:57.177634 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1776)
Nov 12 17:40:57.396618 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1778)
Nov 12 17:40:57.414401 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:40:57.572935 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 17:40:57.574212 systemd[1]: Reloading finished in 885 ms.
Nov 12 17:40:57.620153 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 17:40:57.627729 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 17:40:57.714236 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 17:40:57.722925 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 17:40:57.730666 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 17:40:57.738643 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 17:40:57.748428 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 17:40:57.756076 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 17:40:57.790945 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 17:40:57.800892 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:57.809442 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 17:40:57.821310 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 17:40:57.827897 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 17:40:57.830305 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:57.855715 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 17:40:57.865088 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:57.865478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:57.877364 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 17:40:57.884901 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 17:40:57.890149 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 17:40:57.890641 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 17:40:57.928657 systemd[1]: Finished ensure-sysext.service.
Nov 12 17:40:57.984798 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 17:40:57.985871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 17:40:57.994597 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 17:40:57.997149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 17:40:58.018175 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 17:40:58.047763 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 17:40:58.064774 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 17:40:58.068112 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 17:40:58.090256 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 12 17:40:58.094193 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 17:40:58.096148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 17:40:58.121678 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 17:40:58.125803 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 17:40:58.130001 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 17:40:58.136013 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 17:40:58.141319 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 17:40:58.143646 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 17:40:58.174831 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 17:40:58.179944 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 17:40:58.203862 augenrules[1957]: No rules
Nov 12 17:40:58.210646 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 17:40:58.226932 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 17:40:58.245026 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 17:40:58.252249 lvm[1954]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:40:58.256743 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 17:40:58.341772 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 17:40:58.344666 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 17:40:58.357969 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 17:40:58.367792 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 17:40:58.404020 lvm[1975]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 17:40:58.429814 systemd-networkd[1919]: lo: Link UP
Nov 12 17:40:58.429844 systemd-networkd[1919]: lo: Gained carrier
Nov 12 17:40:58.434144 systemd-networkd[1919]: Enumeration completed
Nov 12 17:40:58.434969 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 17:40:58.438540 systemd-networkd[1919]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:40:58.438768 systemd-networkd[1919]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 17:40:58.441940 systemd-networkd[1919]: eth0: Link UP
Nov 12 17:40:58.442245 systemd-networkd[1919]: eth0: Gained carrier
Nov 12 17:40:58.442279 systemd-networkd[1919]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 17:40:58.450048 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 17:40:58.460008 systemd-networkd[1919]: eth0: DHCPv4 address 172.31.24.62/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 12 17:40:58.492704 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 17:40:58.499479 systemd-resolved[1920]: Positive Trust Anchors:
Nov 12 17:40:58.499536 systemd-resolved[1920]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 17:40:58.499675 systemd-resolved[1920]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 17:40:58.510144 systemd-resolved[1920]: Defaulting to hostname 'linux'.
Nov 12 17:40:58.513861 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 17:40:58.516411 systemd[1]: Reached target network.target - Network.
Nov 12 17:40:58.518421 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 17:40:58.520908 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 17:40:58.523196 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 17:40:58.525776 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 17:40:58.528714 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 17:40:58.531304 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 17:40:58.534316 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 17:40:58.536922 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 17:40:58.536984 systemd[1]: Reached target paths.target - Path Units.
Nov 12 17:40:58.538830 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 17:40:58.542831 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 17:40:58.547820 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 17:40:58.558497 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 17:40:58.562063 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 17:40:58.564849 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 17:40:58.566908 systemd[1]: Reached target basic.target - Basic System.
Nov 12 17:40:58.568889 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:40:58.568946 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 17:40:58.575857 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 17:40:58.589175 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 17:40:58.597216 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 17:40:58.611783 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 17:40:58.620950 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 17:40:58.624811 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 17:40:58.634744 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 17:40:58.642226 systemd[1]: Started ntpd.service - Network Time Service.
Nov 12 17:40:58.661182 jq[1985]: false
Nov 12 17:40:58.655889 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 17:40:58.667223 systemd[1]: Starting setup-oem.service - Setup OEM...
Nov 12 17:40:58.677170 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 17:40:58.697815 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 17:40:58.713094 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 17:40:58.718852 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 17:40:58.720984 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 17:40:58.729438 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 17:40:58.739624 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 17:40:58.758340 dbus-daemon[1984]: [system] SELinux support is enabled
Nov 12 17:40:58.763022 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 17:40:58.768326 dbus-daemon[1984]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1919 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Nov 12 17:40:58.789212 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 17:40:58.790752 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 17:40:58.815224 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 17:40:58.828289 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.systemd1'
Nov 12 17:40:58.815348 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 17:40:58.821332 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 17:40:58.821376 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 17:40:58.851592 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Nov 12 17:40:58.857719 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 17:40:58.860550 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 17:40:58.900040 jq[2000]: true
Nov 12 17:40:58.968634 extend-filesystems[1986]: Found loop4
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found loop5
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found loop6
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found loop7
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found nvme0n1
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found nvme0n1p1
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found nvme0n1p2
Nov 12 17:40:58.972372 extend-filesystems[1986]: Found nvme0n1p3
Nov 12 17:40:58.990369 extend-filesystems[1986]: Found usr
Nov 12 17:40:58.990369 extend-filesystems[1986]: Found nvme0n1p4
Nov 12 17:40:58.990369 extend-filesystems[1986]: Found nvme0n1p6
Nov 12 17:40:58.990369 extend-filesystems[1986]: Found nvme0n1p7
Nov 12 17:40:58.990369 extend-filesystems[1986]: Found nvme0n1p9
Nov 12 17:40:58.990369 extend-filesystems[1986]: Checking size of /dev/nvme0n1p9
Nov 12 17:40:58.999946 update_engine[1997]: I20241112 17:40:58.991884 1997 main.cc:92] Flatcar Update Engine starting
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:49:27 UTC 2024 (1): Starting
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: ----------------------------------------------------
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: ntp-4 is maintained by Network Time Foundation,
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: corporation. Support and training for ntp-4 are
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: available at https://www.nwtime.org/support
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: ----------------------------------------------------
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: proto: precision = 0.096 usec (-23)
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: basedate set to 2024-10-31
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: gps base set to 2024-11-03 (week 2339)
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Listen normally on 3 eth0 172.31.24.62:123
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Listen normally on 4 lo [::1]:123
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: bind(21) AF_INET6 fe80::4b0:6fff:fed8:7531%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: unable to create socket on eth0 (5) for fe80::4b0:6fff:fed8:7531%2#123
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: failed to init interface for address fe80::4b0:6fff:fed8:7531%2
Nov 12 17:40:59.033946 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: Listening on routing socket on fd #21 for interface updates
Nov 12 17:40:59.003081 ntpd[1988]: ntpd 4.2.8p17@1.4004-o Tue Nov 12 15:49:27 UTC 2024 (1): Starting
Nov 12 17:40:59.016485 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 17:40:59.041398 update_engine[1997]: I20241112 17:40:59.019869 1997 update_check_scheduler.cc:74] Next update check in 10m41s
Nov 12 17:40:59.003148 ntpd[1988]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Nov 12 17:40:59.027014 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 17:40:59.065232 tar[2002]: linux-arm64/helm
Nov 12 17:40:59.003169 ntpd[1988]: ----------------------------------------------------
Nov 12 17:40:59.033483 (ntainerd)[2017]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 17:40:59.078264 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:40:59.078264 ntpd[1988]: 12 Nov 17:40:59 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:40:59.003188 ntpd[1988]: ntp-4 is maintained by Network Time Foundation,
Nov 12 17:40:59.047815 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 17:40:59.003206 ntpd[1988]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Nov 12 17:40:59.088813 jq[2013]: true
Nov 12 17:40:59.049211 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 17:40:59.003224 ntpd[1988]: corporation. Support and training for ntp-4 are
Nov 12 17:40:59.096335 systemd[1]: Finished setup-oem.service - Setup OEM.
Nov 12 17:40:59.003242 ntpd[1988]: available at https://www.nwtime.org/support
Nov 12 17:40:59.003261 ntpd[1988]: ----------------------------------------------------
Nov 12 17:40:59.008471 ntpd[1988]: proto: precision = 0.096 usec (-23)
Nov 12 17:40:59.009526 ntpd[1988]: basedate set to 2024-10-31
Nov 12 17:40:59.009598 ntpd[1988]: gps base set to 2024-11-03 (week 2339)
Nov 12 17:40:59.019789 ntpd[1988]: Listen and drop on 0 v6wildcard [::]:123
Nov 12 17:40:59.019874 ntpd[1988]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Nov 12 17:40:59.031817 ntpd[1988]: Listen normally on 2 lo 127.0.0.1:123
Nov 12 17:40:59.031893 ntpd[1988]: Listen normally on 3 eth0 172.31.24.62:123
Nov 12 17:40:59.031958 ntpd[1988]: Listen normally on 4 lo [::1]:123
Nov 12 17:40:59.032034 ntpd[1988]: bind(21) AF_INET6 fe80::4b0:6fff:fed8:7531%2#123 flags 0x11 failed: Cannot assign requested address
Nov 12 17:40:59.032072 ntpd[1988]: unable to create socket on eth0 (5) for fe80::4b0:6fff:fed8:7531%2#123
Nov 12 17:40:59.032105 ntpd[1988]: failed to init interface for address fe80::4b0:6fff:fed8:7531%2
Nov 12 17:40:59.032160 ntpd[1988]: Listening on routing socket on fd #21 for interface updates
Nov 12 17:40:59.071318 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:40:59.071372 ntpd[1988]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Nov 12 17:40:59.148602 extend-filesystems[1986]: Resized partition /dev/nvme0n1p9
Nov 12 17:40:59.162006 extend-filesystems[2039]: resize2fs 1.47.1 (20-May-2024)
Nov 12 17:40:59.175671 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Nov 12 17:40:59.263107 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Nov 12 17:40:59.299148 extend-filesystems[2039]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Nov 12 17:40:59.299148 extend-filesystems[2039]: old_desc_blocks = 1, new_desc_blocks = 1
Nov 12 17:40:59.299148 extend-filesystems[2039]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Nov 12 17:40:59.326399 extend-filesystems[1986]: Resized filesystem in /dev/nvme0n1p9
Nov 12 17:40:59.313648 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 17:40:59.314087 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 17:40:59.344888 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Nov 12 17:40:59.370448 coreos-metadata[1983]: Nov 12 17:40:59.369 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Nov 12 17:40:59.370448 coreos-metadata[1983]: Nov 12 17:40:59.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Nov 12 17:40:59.370448 coreos-metadata[1983]: Nov 12 17:40:59.370 INFO Fetch successful
Nov 12 17:40:59.376674 coreos-metadata[1983]: Nov 12 17:40:59.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Nov 12 17:40:59.376674 coreos-metadata[1983]: Nov 12 17:40:59.370 INFO Fetch successful
Nov 12 17:40:59.376674 coreos-metadata[1983]: Nov 12 17:40:59.370 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Nov 12 17:40:59.376674 coreos-metadata[1983]: Nov 12 17:40:59.371 INFO Fetch successful
Nov 12 17:40:59.376674 coreos-metadata[1983]: Nov 12 17:40:59.371 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetch successful
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetch failed with 404: resource not found
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetch successful
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetch successful
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetch successful
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetch successful
Nov 12 17:40:59.383136 coreos-metadata[1983]: Nov 12 17:40:59.379 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Nov 12 17:40:59.385200 coreos-metadata[1983]: Nov 12 17:40:59.383 INFO Fetch successful
Nov 12 17:40:59.504768 bash[2068]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 17:40:59.501477 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 17:40:59.518628 systemd[1]: Starting sshkeys.service...
Nov 12 17:40:59.583026 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1795)
Nov 12 17:40:59.592425 systemd-logind[1993]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 12 17:40:59.604770 systemd-logind[1993]: Watching system buttons on /dev/input/event1 (Sleep Button)
Nov 12 17:40:59.608262 systemd-logind[1993]: New seat seat0.
Nov 12 17:40:59.625238 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 17:40:59.671034 systemd-networkd[1919]: eth0: Gained IPv6LL
Nov 12 17:40:59.805997 locksmithd[2027]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 17:40:59.827236 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 17:40:59.832530 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 17:40:59.843770 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Nov 12 17:40:59.865244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:40:59.881354 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 17:40:59.886296 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 17:40:59.892527 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 17:40:59.928356 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Nov 12 17:40:59.936302 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Nov 12 17:40:59.961407 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.hostname1'
Nov 12 17:40:59.961738 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Nov 12 17:40:59.972803 dbus-daemon[1984]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2008 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Nov 12 17:41:00.007690 systemd[1]: Starting polkit.service - Authorization Manager...
Nov 12 17:41:00.068093 containerd[2017]: time="2024-11-12T17:41:00.062174889Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 17:41:00.156606 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 17:41:00.163753 amazon-ssm-agent[2142]: Initializing new seelog logger
Nov 12 17:41:00.176733 amazon-ssm-agent[2142]: New Seelog Logger Creation Complete
Nov 12 17:41:00.176733 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.176733 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.170011 polkitd[2159]: Started polkitd version 121
Nov 12 17:41:00.177874 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 processing appconfig overrides
Nov 12 17:41:00.190595 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.195679 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.195679 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 processing appconfig overrides
Nov 12 17:41:00.195679 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.195679 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.195679 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 processing appconfig overrides
Nov 12 17:41:00.195679 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO Proxy environment variables:
Nov 12 17:41:00.218525 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.218525 amazon-ssm-agent[2142]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Nov 12 17:41:00.218525 amazon-ssm-agent[2142]: 2024/11/12 17:41:00 processing appconfig overrides
Nov 12 17:41:00.245973 polkitd[2159]: Loading rules from directory /etc/polkit-1/rules.d
Nov 12 17:41:00.249412 polkitd[2159]: Loading rules from directory /usr/share/polkit-1/rules.d
Nov 12 17:41:00.272660 polkitd[2159]: Finished loading, compiling and executing 2 rules
Nov 12 17:41:00.289279 dbus-daemon[1984]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Nov 12 17:41:00.293770 systemd[1]: Started polkit.service - Authorization Manager.
Nov 12 17:41:00.301150 polkitd[2159]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Nov 12 17:41:00.314616 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO no_proxy:
Nov 12 17:41:00.423685 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO https_proxy:
Nov 12 17:41:00.482246 containerd[2017]: time="2024-11-12T17:41:00.482174015Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.489886 systemd-hostnamed[2008]: Hostname set to (transient)
Nov 12 17:41:00.492010 containerd[2017]: time="2024-11-12T17:41:00.491912591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:00.492979 containerd[2017]: time="2024-11-12T17:41:00.492931007Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.494187239Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.494748431Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.494792603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.494946551Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.494979167Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.495340895Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.495377615Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.495414875Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495888 containerd[2017]: time="2024-11-12T17:41:00.495440855Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.495662 systemd-resolved[1920]: System hostname changed to 'ip-172-31-24-62'.
Nov 12 17:41:00.500948 containerd[2017]: time="2024-11-12T17:41:00.498367919Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.500948 containerd[2017]: time="2024-11-12T17:41:00.498975443Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 17:41:00.500948 containerd[2017]: time="2024-11-12T17:41:00.499274699Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 17:41:00.500948 containerd[2017]: time="2024-11-12T17:41:00.499314167Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 17:41:00.500948 containerd[2017]: time="2024-11-12T17:41:00.499547675Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 17:41:00.500948 containerd[2017]: time="2024-11-12T17:41:00.499744223Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 17:41:00.515339 containerd[2017]: time="2024-11-12T17:41:00.515255675Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 17:41:00.516768 containerd[2017]: time="2024-11-12T17:41:00.516700067Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 17:41:00.517037 containerd[2017]: time="2024-11-12T17:41:00.516990539Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 17:41:00.517173 containerd[2017]: time="2024-11-12T17:41:00.517143827Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 17:41:00.517319 containerd[2017]: time="2024-11-12T17:41:00.517288571Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 17:41:00.520684 containerd[2017]: time="2024-11-12T17:41:00.518893235Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 17:41:00.524245 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO http_proxy:
Nov 12 17:41:00.524375 containerd[2017]: time="2024-11-12T17:41:00.522775427Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 17:41:00.524575 containerd[2017]: time="2024-11-12T17:41:00.524489195Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 17:41:00.524775 containerd[2017]: time="2024-11-12T17:41:00.524719571Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 17:41:00.524896 containerd[2017]: time="2024-11-12T17:41:00.524866559Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 17:41:00.525021 containerd[2017]: time="2024-11-12T17:41:00.524991791Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.525367 containerd[2017]: time="2024-11-12T17:41:00.525280475Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.526533 containerd[2017]: time="2024-11-12T17:41:00.526479263Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.526898387Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527211791Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527305415Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527338283Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527371727Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527418263Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527452511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527484539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.527592 containerd[2017]: time="2024-11-12T17:41:00.527517983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.531292 containerd[2017]: time="2024-11-12T17:41:00.527547947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.531292 containerd[2017]: time="2024-11-12T17:41:00.529174607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.531292 containerd[2017]: time="2024-11-12T17:41:00.529971155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.531292 containerd[2017]: time="2024-11-12T17:41:00.530022515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.531292 containerd[2017]: time="2024-11-12T17:41:00.530368775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.532642 containerd[2017]: time="2024-11-12T17:41:00.531237491Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.532993 containerd[2017]: time="2024-11-12T17:41:00.532887467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.533617 containerd[2017]: time="2024-11-12T17:41:00.533165447Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.533617 containerd[2017]: time="2024-11-12T17:41:00.533258291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.533617 containerd[2017]: time="2024-11-12T17:41:00.533309963Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 17:41:00.533617 containerd[2017]: time="2024-11-12T17:41:00.533396399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.533617 containerd[2017]: time="2024-11-12T17:41:00.533435267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.533617 containerd[2017]: time="2024-11-12T17:41:00.533466611Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534175631Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534258803Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534293447Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534329435Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534361379Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534392903Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 17:41:00.534879 containerd[2017]: time="2024-11-12T17:41:00.534418247Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 17:41:00.538817 containerd[2017]: time="2024-11-12T17:41:00.534444203Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 17:41:00.541649 containerd[2017]: time="2024-11-12T17:41:00.539618663Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 17:41:00.541649 containerd[2017]: time="2024-11-12T17:41:00.539767775Z" level=info msg="Connect containerd service" Nov 12 17:41:00.541649 containerd[2017]: time="2024-11-12T17:41:00.539859419Z" level=info msg="using legacy CRI server" Nov 12 17:41:00.541649 containerd[2017]: time="2024-11-12T17:41:00.539881007Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 17:41:00.541649 containerd[2017]: time="2024-11-12T17:41:00.540048059Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 17:41:00.545008 containerd[2017]: time="2024-11-12T17:41:00.544933547Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547445123Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547599467Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547703651Z" level=info msg="Start subscribing containerd event" Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547766375Z" level=info msg="Start recovering state" Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547890119Z" level=info msg="Start event monitor" Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547912907Z" level=info msg="Start snapshots syncer" Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547938887Z" level=info msg="Start cni network conf syncer for default" Nov 12 17:41:00.550828 containerd[2017]: time="2024-11-12T17:41:00.547957235Z" level=info msg="Start streaming server" Nov 12 17:41:00.548236 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 17:41:00.559820 containerd[2017]: time="2024-11-12T17:41:00.558460739Z" level=info msg="containerd successfully booted in 0.514386s" Nov 12 17:41:00.604075 coreos-metadata[2154]: Nov 12 17:41:00.602 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 12 17:41:00.604075 coreos-metadata[2154]: Nov 12 17:41:00.603 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 12 17:41:00.606422 coreos-metadata[2154]: Nov 12 17:41:00.604 INFO Fetch successful Nov 12 17:41:00.606422 coreos-metadata[2154]: Nov 12 17:41:00.604 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 12 17:41:00.606422 coreos-metadata[2154]: Nov 12 17:41:00.605 INFO Fetch successful Nov 12 17:41:00.615247 unknown[2154]: wrote ssh authorized keys file for user: core Nov 12 17:41:00.631096 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO Checking if agent identity type OnPrem can be assumed Nov 12 17:41:00.716078 update-ssh-keys[2202]: Updated "/home/core/.ssh/authorized_keys" Nov 12 17:41:00.720901 systemd[1]: Finished 
coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 12 17:41:00.729316 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO Checking if agent identity type EC2 can be assumed Nov 12 17:41:00.741671 systemd[1]: Finished sshkeys.service. Nov 12 17:41:00.822746 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO Agent will take identity from EC2 Nov 12 17:41:00.922085 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 17:41:00.924639 sshd_keygen[2007]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 17:41:01.023219 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 17:41:01.040848 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 17:41:01.055989 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 17:41:01.068110 systemd[1]: Started sshd@0-172.31.24.62:22-139.178.89.65:53096.service - OpenSSH per-connection server daemon (139.178.89.65:53096). Nov 12 17:41:01.125393 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] using named pipe channel for IPC Nov 12 17:41:01.122960 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 17:41:01.123324 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 17:41:01.150413 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 12 17:41:01.221804 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Nov 12 17:41:01.246801 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 17:41:01.261803 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 17:41:01.274245 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 17:41:01.277194 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 12 17:41:01.322671 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 12 17:41:01.389773 sshd[2216]: Accepted publickey for core from 139.178.89.65 port 53096 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:01.393075 sshd[2216]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:01.425727 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 17:41:01.429946 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] Starting Core Agent Nov 12 17:41:01.437529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 17:41:01.458757 systemd-logind[1993]: New session 1 of user core. Nov 12 17:41:01.508710 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 17:41:01.515876 tar[2002]: linux-arm64/LICENSE Nov 12 17:41:01.516530 tar[2002]: linux-arm64/README.md Nov 12 17:41:01.527321 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 12 17:41:01.538416 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [amazon-ssm-agent] registrar detected. Attempting registration Nov 12 17:41:01.558965 (systemd)[2227]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 17:41:01.585676 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 12 17:41:01.637682 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [Registrar] Starting registrar module Nov 12 17:41:01.736420 amazon-ssm-agent[2142]: 2024-11-12 17:41:00 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Nov 12 17:41:01.869319 systemd[2227]: Queued start job for default target default.target. Nov 12 17:41:01.879260 systemd[2227]: Created slice app.slice - User Application Slice. Nov 12 17:41:01.879537 systemd[2227]: Reached target paths.target - Paths. 
Nov 12 17:41:01.879612 systemd[2227]: Reached target timers.target - Timers. Nov 12 17:41:01.890054 systemd[2227]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 17:41:01.917123 systemd[2227]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 17:41:01.917353 systemd[2227]: Reached target sockets.target - Sockets. Nov 12 17:41:01.917387 systemd[2227]: Reached target basic.target - Basic System. Nov 12 17:41:01.917488 systemd[2227]: Reached target default.target - Main User Target. Nov 12 17:41:01.917578 systemd[2227]: Startup finished in 332ms. Nov 12 17:41:01.918836 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 12 17:41:01.929018 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 17:41:02.003912 ntpd[1988]: Listen normally on 6 eth0 [fe80::4b0:6fff:fed8:7531%2]:123 Nov 12 17:41:02.005908 ntpd[1988]: 12 Nov 17:41:02 ntpd[1988]: Listen normally on 6 eth0 [fe80::4b0:6fff:fed8:7531%2]:123 Nov 12 17:41:02.025617 amazon-ssm-agent[2142]: 2024-11-12 17:41:02 INFO [EC2Identity] EC2 registration was successful. Nov 12 17:41:02.070605 amazon-ssm-agent[2142]: 2024-11-12 17:41:02 INFO [CredentialRefresher] credentialRefresher has started Nov 12 17:41:02.070605 amazon-ssm-agent[2142]: 2024-11-12 17:41:02 INFO [CredentialRefresher] Starting credentials refresher loop Nov 12 17:41:02.070605 amazon-ssm-agent[2142]: 2024-11-12 17:41:02 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 12 17:41:02.107752 systemd[1]: Started sshd@1-172.31.24.62:22-139.178.89.65:60866.service - OpenSSH per-connection server daemon (139.178.89.65:60866). 
Nov 12 17:41:02.127243 amazon-ssm-agent[2142]: 2024-11-12 17:41:02 INFO [CredentialRefresher] Next credential rotation will be in 30.7749923052 minutes Nov 12 17:41:02.308488 sshd[2241]: Accepted publickey for core from 139.178.89.65 port 60866 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:02.312027 sshd[2241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:02.323667 systemd-logind[1993]: New session 2 of user core. Nov 12 17:41:02.332947 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 17:41:02.471293 sshd[2241]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:02.479043 systemd[1]: sshd@1-172.31.24.62:22-139.178.89.65:60866.service: Deactivated successfully. Nov 12 17:41:02.486140 systemd[1]: session-2.scope: Deactivated successfully. Nov 12 17:41:02.494054 systemd-logind[1993]: Session 2 logged out. Waiting for processes to exit. Nov 12 17:41:02.518544 systemd[1]: Started sshd@2-172.31.24.62:22-139.178.89.65:60882.service - OpenSSH per-connection server daemon (139.178.89.65:60882). Nov 12 17:41:02.524007 systemd-logind[1993]: Removed session 2. Nov 12 17:41:02.711536 sshd[2248]: Accepted publickey for core from 139.178.89.65 port 60882 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:02.715798 sshd[2248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:02.747770 systemd-logind[1993]: New session 3 of user core. Nov 12 17:41:02.766717 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 17:41:02.773935 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:02.784321 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 17:41:02.790778 systemd[1]: Startup finished in 1.318s (kernel) + 8.228s (initrd) + 9.795s (userspace) = 19.343s. 
Nov 12 17:41:02.798145 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:02.909928 sshd[2248]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:02.914276 systemd[1]: session-3.scope: Deactivated successfully. Nov 12 17:41:02.916544 systemd[1]: sshd@2-172.31.24.62:22-139.178.89.65:60882.service: Deactivated successfully. Nov 12 17:41:02.922594 systemd-logind[1993]: Session 3 logged out. Waiting for processes to exit. Nov 12 17:41:02.926060 systemd-logind[1993]: Removed session 3. Nov 12 17:41:03.104365 amazon-ssm-agent[2142]: 2024-11-12 17:41:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 12 17:41:03.204933 amazon-ssm-agent[2142]: 2024-11-12 17:41:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2267) started Nov 12 17:41:03.305784 amazon-ssm-agent[2142]: 2024-11-12 17:41:03 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 12 17:41:04.233728 kubelet[2254]: E1112 17:41:04.233470 2254 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:04.238011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:04.238437 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:04.239275 systemd[1]: kubelet.service: Consumed 1.440s CPU time. Nov 12 17:41:05.519308 systemd-resolved[1920]: Clock change detected. Flushing caches. 
Nov 12 17:41:12.468810 systemd[1]: Started sshd@3-172.31.24.62:22-139.178.89.65:38436.service - OpenSSH per-connection server daemon (139.178.89.65:38436). Nov 12 17:41:12.645514 sshd[2284]: Accepted publickey for core from 139.178.89.65 port 38436 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:12.649826 sshd[2284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:12.661334 systemd-logind[1993]: New session 4 of user core. Nov 12 17:41:12.671278 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 17:41:12.806434 sshd[2284]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:12.816048 systemd[1]: sshd@3-172.31.24.62:22-139.178.89.65:38436.service: Deactivated successfully. Nov 12 17:41:12.820647 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 17:41:12.822751 systemd-logind[1993]: Session 4 logged out. Waiting for processes to exit. Nov 12 17:41:12.826110 systemd-logind[1993]: Removed session 4. Nov 12 17:41:12.846629 systemd[1]: Started sshd@4-172.31.24.62:22-139.178.89.65:38440.service - OpenSSH per-connection server daemon (139.178.89.65:38440). Nov 12 17:41:13.041860 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 38440 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:13.045084 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:13.055598 systemd-logind[1993]: New session 5 of user core. Nov 12 17:41:13.064660 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 17:41:13.188389 sshd[2291]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:13.194311 systemd-logind[1993]: Session 5 logged out. Waiting for processes to exit. Nov 12 17:41:13.195805 systemd[1]: sshd@4-172.31.24.62:22-139.178.89.65:38440.service: Deactivated successfully. Nov 12 17:41:13.199936 systemd[1]: session-5.scope: Deactivated successfully. 
Nov 12 17:41:13.203973 systemd-logind[1993]: Removed session 5. Nov 12 17:41:13.235484 systemd[1]: Started sshd@5-172.31.24.62:22-139.178.89.65:38448.service - OpenSSH per-connection server daemon (139.178.89.65:38448). Nov 12 17:41:13.412704 sshd[2298]: Accepted publickey for core from 139.178.89.65 port 38448 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:13.415945 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:13.425130 systemd-logind[1993]: New session 6 of user core. Nov 12 17:41:13.437201 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 17:41:13.571751 sshd[2298]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:13.578283 systemd-logind[1993]: Session 6 logged out. Waiting for processes to exit. Nov 12 17:41:13.579743 systemd[1]: sshd@5-172.31.24.62:22-139.178.89.65:38448.service: Deactivated successfully. Nov 12 17:41:13.585366 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 17:41:13.589325 systemd-logind[1993]: Removed session 6. Nov 12 17:41:13.612510 systemd[1]: Started sshd@6-172.31.24.62:22-139.178.89.65:38450.service - OpenSSH per-connection server daemon (139.178.89.65:38450). Nov 12 17:41:13.766662 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 17:41:13.782744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:13.801260 sshd[2305]: Accepted publickey for core from 139.178.89.65 port 38450 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:13.807374 sshd[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:13.825990 systemd-logind[1993]: New session 7 of user core. Nov 12 17:41:13.830720 systemd[1]: Started session-7.scope - Session 7 of User core. 
Nov 12 17:41:13.957049 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 17:41:13.957759 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:13.977663 sudo[2311]: pam_unix(sudo:session): session closed for user root Nov 12 17:41:14.001791 sshd[2305]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:14.013785 systemd[1]: sshd@6-172.31.24.62:22-139.178.89.65:38450.service: Deactivated successfully. Nov 12 17:41:14.019069 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 17:41:14.022384 systemd-logind[1993]: Session 7 logged out. Waiting for processes to exit. Nov 12 17:41:14.047062 systemd[1]: Started sshd@7-172.31.24.62:22-139.178.89.65:38458.service - OpenSSH per-connection server daemon (139.178.89.65:38458). Nov 12 17:41:14.049357 systemd-logind[1993]: Removed session 7. Nov 12 17:41:14.190665 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:14.205250 (kubelet)[2323]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:14.250777 sshd[2316]: Accepted publickey for core from 139.178.89.65 port 38458 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:14.253578 sshd[2316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:14.264755 systemd-logind[1993]: New session 8 of user core. Nov 12 17:41:14.274210 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 17:41:14.308449 kubelet[2323]: E1112 17:41:14.308369 2323 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:14.316892 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:14.317320 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:14.386616 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 17:41:14.388691 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:14.397731 sudo[2333]: pam_unix(sudo:session): session closed for user root Nov 12 17:41:14.411759 sudo[2332]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 17:41:14.412549 sudo[2332]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:14.440422 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 17:41:14.444632 auditctl[2336]: No rules Nov 12 17:41:14.445418 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 17:41:14.445933 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 17:41:14.461813 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 17:41:14.503311 augenrules[2354]: No rules Nov 12 17:41:14.505638 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 17:41:14.509431 sudo[2332]: pam_unix(sudo:session): session closed for user root Nov 12 17:41:14.533125 sshd[2316]: pam_unix(sshd:session): session closed for user core Nov 12 17:41:14.538825 systemd-logind[1993]: Session 8 logged out. 
Waiting for processes to exit. Nov 12 17:41:14.540045 systemd[1]: sshd@7-172.31.24.62:22-139.178.89.65:38458.service: Deactivated successfully. Nov 12 17:41:14.543232 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 17:41:14.546588 systemd-logind[1993]: Removed session 8. Nov 12 17:41:14.569490 systemd[1]: Started sshd@8-172.31.24.62:22-139.178.89.65:38462.service - OpenSSH per-connection server daemon (139.178.89.65:38462). Nov 12 17:41:14.754183 sshd[2362]: Accepted publickey for core from 139.178.89.65 port 38462 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:41:14.758425 sshd[2362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:41:14.769001 systemd-logind[1993]: New session 9 of user core. Nov 12 17:41:14.782710 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 17:41:14.889014 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 17:41:14.889757 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 17:41:15.401629 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 17:41:15.414720 (dockerd)[2381]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 17:41:15.798470 dockerd[2381]: time="2024-11-12T17:41:15.798246011Z" level=info msg="Starting up" Nov 12 17:41:15.956305 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport500540578-merged.mount: Deactivated successfully. Nov 12 17:41:15.987792 dockerd[2381]: time="2024-11-12T17:41:15.986826215Z" level=info msg="Loading containers: start." Nov 12 17:41:16.197958 kernel: Initializing XFRM netlink socket Nov 12 17:41:16.235222 (udev-worker)[2404]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 17:41:16.345066 systemd-networkd[1919]: docker0: Link UP Nov 12 17:41:16.370509 dockerd[2381]: time="2024-11-12T17:41:16.370427493Z" level=info msg="Loading containers: done." Nov 12 17:41:16.393384 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3508865375-merged.mount: Deactivated successfully. Nov 12 17:41:16.401387 dockerd[2381]: time="2024-11-12T17:41:16.401302690Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 17:41:16.401613 dockerd[2381]: time="2024-11-12T17:41:16.401467138Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 17:41:16.401726 dockerd[2381]: time="2024-11-12T17:41:16.401679298Z" level=info msg="Daemon has completed initialization" Nov 12 17:41:16.473312 dockerd[2381]: time="2024-11-12T17:41:16.471208774Z" level=info msg="API listen on /run/docker.sock" Nov 12 17:41:16.472571 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 17:41:17.627356 containerd[2017]: time="2024-11-12T17:41:17.627067740Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 17:41:18.364885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1025211556.mount: Deactivated successfully. 
Nov 12 17:41:20.281502 containerd[2017]: time="2024-11-12T17:41:20.281348953Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=29864215" Nov 12 17:41:20.282862 containerd[2017]: time="2024-11-12T17:41:20.282246637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:20.292717 containerd[2017]: time="2024-11-12T17:41:20.292534549Z" level=info msg="ImageCreate event name:\"sha256:6c71f76b696101728cbf70924bde859d444fb8016dfddc50303d11a31e8dae2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:20.297854 containerd[2017]: time="2024-11-12T17:41:20.297786181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:20.299047 containerd[2017]: time="2024-11-12T17:41:20.298942165Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:6c71f76b696101728cbf70924bde859d444fb8016dfddc50303d11a31e8dae2a\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"29861015\" in 2.671791781s" Nov 12 17:41:20.299222 containerd[2017]: time="2024-11-12T17:41:20.299049073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:6c71f76b696101728cbf70924bde859d444fb8016dfddc50303d11a31e8dae2a\"" Nov 12 17:41:20.348791 containerd[2017]: time="2024-11-12T17:41:20.348728161Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 17:41:22.166692 containerd[2017]: time="2024-11-12T17:41:22.166619954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:22.168814 containerd[2017]: time="2024-11-12T17:41:22.168733946Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=26901027" Nov 12 17:41:22.169855 containerd[2017]: time="2024-11-12T17:41:22.169296878Z" level=info msg="ImageCreate event name:\"sha256:b572f51d3f4ccb05e0b995272e61c33a99fdf709f605989ee64e93248e0ca60a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:22.175354 containerd[2017]: time="2024-11-12T17:41:22.175231550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:22.177839 containerd[2017]: time="2024-11-12T17:41:22.177609014Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:b572f51d3f4ccb05e0b995272e61c33a99fdf709f605989ee64e93248e0ca60a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"28303652\" in 1.828581465s" Nov 12 17:41:22.177839 containerd[2017]: time="2024-11-12T17:41:22.177669662Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:b572f51d3f4ccb05e0b995272e61c33a99fdf709f605989ee64e93248e0ca60a\"" Nov 12 17:41:22.218428 containerd[2017]: time="2024-11-12T17:41:22.218276066Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 17:41:23.784106 containerd[2017]: time="2024-11-12T17:41:23.783209262Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:23.785774 containerd[2017]: time="2024-11-12T17:41:23.785684454Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=16164692" Nov 12 17:41:23.786722 containerd[2017]: time="2024-11-12T17:41:23.786660078Z" level=info msg="ImageCreate event name:\"sha256:41769a7fc0b6741c0a2cc72b204685e278287051d0e65557d066a04781c38d95\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:23.797413 containerd[2017]: time="2024-11-12T17:41:23.797170710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:23.800709 containerd[2017]: time="2024-11-12T17:41:23.800469582Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id \"sha256:41769a7fc0b6741c0a2cc72b204685e278287051d0e65557d066a04781c38d95\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"17567335\" in 1.582127396s" Nov 12 17:41:23.800709 containerd[2017]: time="2024-11-12T17:41:23.800532438Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:41769a7fc0b6741c0a2cc72b204685e278287051d0e65557d066a04781c38d95\"" Nov 12 17:41:23.852501 containerd[2017]: time="2024-11-12T17:41:23.852441403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 17:41:24.492350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 17:41:24.501389 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:24.927463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 17:41:24.944240 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:25.103353 kubelet[2614]: E1112 17:41:25.102161 2614 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:25.109421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:25.109778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:25.510482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2056239793.mount: Deactivated successfully. Nov 12 17:41:26.203989 containerd[2017]: time="2024-11-12T17:41:26.203652870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:26.206003 containerd[2017]: time="2024-11-12T17:41:26.205820538Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=25660278" Nov 12 17:41:26.207505 containerd[2017]: time="2024-11-12T17:41:26.207326682Z" level=info msg="ImageCreate event name:\"sha256:95ea5eecb1c87350e3f1d3aa5e1e9aef277acc9b38dff12db3f7e97141ccb494\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:26.214296 containerd[2017]: time="2024-11-12T17:41:26.213315030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:26.215995 containerd[2017]: time="2024-11-12T17:41:26.215799390Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id 
\"sha256:95ea5eecb1c87350e3f1d3aa5e1e9aef277acc9b38dff12db3f7e97141ccb494\", repo tag \"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"25659297\" in 2.363006567s" Nov 12 17:41:26.216260 containerd[2017]: time="2024-11-12T17:41:26.216007278Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:95ea5eecb1c87350e3f1d3aa5e1e9aef277acc9b38dff12db3f7e97141ccb494\"" Nov 12 17:41:26.287958 containerd[2017]: time="2024-11-12T17:41:26.287776639Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 17:41:26.840175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3361480961.mount: Deactivated successfully. Nov 12 17:41:28.020799 containerd[2017]: time="2024-11-12T17:41:28.020719555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:28.024118 containerd[2017]: time="2024-11-12T17:41:28.024050731Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Nov 12 17:41:28.026257 containerd[2017]: time="2024-11-12T17:41:28.026183803Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:28.033862 containerd[2017]: time="2024-11-12T17:41:28.033693511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:28.037713 containerd[2017]: time="2024-11-12T17:41:28.037440823Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", 
repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.749557672s" Nov 12 17:41:28.037713 containerd[2017]: time="2024-11-12T17:41:28.037529419Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Nov 12 17:41:28.092359 containerd[2017]: time="2024-11-12T17:41:28.092058092Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 17:41:28.642769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2100150350.mount: Deactivated successfully. Nov 12 17:41:28.648559 containerd[2017]: time="2024-11-12T17:41:28.648478390Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:28.650765 containerd[2017]: time="2024-11-12T17:41:28.650699650Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Nov 12 17:41:28.652015 containerd[2017]: time="2024-11-12T17:41:28.651957478Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:28.655800 containerd[2017]: time="2024-11-12T17:41:28.655721854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:28.657985 containerd[2017]: time="2024-11-12T17:41:28.657461146Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest 
\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 565.333454ms" Nov 12 17:41:28.657985 containerd[2017]: time="2024-11-12T17:41:28.657516994Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Nov 12 17:41:28.696250 containerd[2017]: time="2024-11-12T17:41:28.696188603Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 17:41:29.248418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1442216288.mount: Deactivated successfully. Nov 12 17:41:30.041794 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 12 17:41:33.078599 containerd[2017]: time="2024-11-12T17:41:33.078259068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:33.080508 containerd[2017]: time="2024-11-12T17:41:33.080454360Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Nov 12 17:41:33.081442 containerd[2017]: time="2024-11-12T17:41:33.080947908Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:33.087045 containerd[2017]: time="2024-11-12T17:41:33.086960460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:41:33.091149 containerd[2017]: time="2024-11-12T17:41:33.089446056Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest 
\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.393191909s" Nov 12 17:41:33.091149 containerd[2017]: time="2024-11-12T17:41:33.089507208Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Nov 12 17:41:35.302009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 17:41:35.311672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:35.665768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:35.680502 (kubelet)[2802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 17:41:35.788277 kubelet[2802]: E1112 17:41:35.787864 2802 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 17:41:35.795784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 17:41:35.797559 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 17:41:40.435272 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:40.450863 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:40.497460 systemd[1]: Reloading requested from client PID 2815 ('systemctl') (unit session-9.scope)... Nov 12 17:41:40.497503 systemd[1]: Reloading... Nov 12 17:41:40.777023 zram_generator::config[2863]: No configuration found. 
Nov 12 17:41:41.066374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 17:41:41.267839 systemd[1]: Reloading finished in 769 ms. Nov 12 17:41:41.401958 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 17:41:41.402227 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 17:41:41.404056 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:41.421380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 17:41:42.042174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 17:41:42.043604 (kubelet)[2919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 17:41:42.139697 kubelet[2919]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 17:41:42.139697 kubelet[2919]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 17:41:42.139697 kubelet[2919]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 17:41:42.142140 kubelet[2919]: I1112 17:41:42.142009 2919 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 17:41:42.796076 kubelet[2919]: I1112 17:41:42.795398 2919 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 17:41:42.796076 kubelet[2919]: I1112 17:41:42.795450 2919 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 17:41:42.796413 kubelet[2919]: I1112 17:41:42.796130 2919 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 17:41:42.823945 kubelet[2919]: E1112 17:41:42.823850 2919 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.826095 kubelet[2919]: I1112 17:41:42.825756 2919 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 17:41:42.848879 kubelet[2919]: I1112 17:41:42.848122 2919 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 17:41:42.850999 kubelet[2919]: I1112 17:41:42.850885 2919 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 17:41:42.851530 kubelet[2919]: I1112 17:41:42.851194 2919 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 17:41:42.851849 kubelet[2919]: I1112 17:41:42.851816 2919 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 
17:41:42.852131 kubelet[2919]: I1112 17:41:42.852090 2919 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 17:41:42.852699 kubelet[2919]: I1112 17:41:42.852650 2919 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:41:42.854655 kubelet[2919]: I1112 17:41:42.854557 2919 kubelet.go:400] "Attempting to sync node with API server" Nov 12 17:41:42.854985 kubelet[2919]: I1112 17:41:42.854950 2919 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 17:41:42.855979 kubelet[2919]: I1112 17:41:42.855224 2919 kubelet.go:312] "Adding apiserver pod source" Nov 12 17:41:42.855979 kubelet[2919]: I1112 17:41:42.855321 2919 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 17:41:42.858063 kubelet[2919]: W1112 17:41:42.857971 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-62&limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.858356 kubelet[2919]: E1112 17:41:42.858320 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-62&limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.858874 kubelet[2919]: I1112 17:41:42.858823 2919 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 17:41:42.859671 kubelet[2919]: I1112 17:41:42.859627 2919 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 17:41:42.859976 kubelet[2919]: W1112 17:41:42.859892 2919 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Nov 12 17:41:42.861715 kubelet[2919]: I1112 17:41:42.861662 2919 server.go:1264] "Started kubelet" Nov 12 17:41:42.862224 kubelet[2919]: W1112 17:41:42.862138 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.863473 kubelet[2919]: E1112 17:41:42.862440 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.873176 kubelet[2919]: E1112 17:41:42.872877 2919 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.24.62:6443/api/v1/namespaces/default/events\": dial tcp 172.31.24.62:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-24-62.1807496a59592499 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-24-62,UID:ip-172-31-24-62,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-24-62,},FirstTimestamp:2024-11-12 17:41:42.861620377 +0000 UTC m=+0.804006557,LastTimestamp:2024-11-12 17:41:42.861620377 +0000 UTC m=+0.804006557,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-24-62,}" Nov 12 17:41:42.874268 kubelet[2919]: I1112 17:41:42.874214 2919 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 17:41:42.874499 kubelet[2919]: I1112 17:41:42.874407 2919 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 17:41:42.877640 kubelet[2919]: I1112 17:41:42.877565 2919 server.go:455] "Adding debug handlers to kubelet server" Nov 12 
17:41:42.879594 kubelet[2919]: I1112 17:41:42.879479 2919 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 17:41:42.880012 kubelet[2919]: I1112 17:41:42.879952 2919 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 17:41:42.882413 kubelet[2919]: E1112 17:41:42.882345 2919 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 17:41:42.887696 kubelet[2919]: E1112 17:41:42.886593 2919 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-24-62\" not found" Nov 12 17:41:42.887696 kubelet[2919]: I1112 17:41:42.886851 2919 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 17:41:42.887696 kubelet[2919]: I1112 17:41:42.887172 2919 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 17:41:42.890200 kubelet[2919]: I1112 17:41:42.890139 2919 reconciler.go:26] "Reconciler: start to sync state" Nov 12 17:41:42.892066 kubelet[2919]: W1112 17:41:42.891850 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.892457 kubelet[2919]: E1112 17:41:42.892412 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.893640 kubelet[2919]: E1112 17:41:42.892855 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": dial tcp 172.31.24.62:6443: connect: connection refused" interval="200ms" Nov 12 17:41:42.893640 kubelet[2919]: I1112 17:41:42.893276 2919 factory.go:221] Registration of the systemd container factory successfully Nov 12 17:41:42.893640 kubelet[2919]: I1112 17:41:42.893435 2919 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 17:41:42.897175 kubelet[2919]: I1112 17:41:42.897115 2919 factory.go:221] Registration of the containerd container factory successfully Nov 12 17:41:42.933800 kubelet[2919]: I1112 17:41:42.933732 2919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 17:41:42.938188 kubelet[2919]: I1112 17:41:42.938103 2919 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 17:41:42.938188 kubelet[2919]: I1112 17:41:42.938198 2919 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 17:41:42.938495 kubelet[2919]: I1112 17:41:42.938256 2919 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 17:41:42.938495 kubelet[2919]: E1112 17:41:42.938351 2919 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 17:41:42.945781 kubelet[2919]: W1112 17:41:42.945607 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.945781 kubelet[2919]: E1112 17:41:42.945684 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
"https://172.31.24.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused Nov 12 17:41:42.952163 kubelet[2919]: I1112 17:41:42.952076 2919 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 17:41:42.952163 kubelet[2919]: I1112 17:41:42.952135 2919 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 17:41:42.952163 kubelet[2919]: I1112 17:41:42.952180 2919 state_mem.go:36] "Initialized new in-memory state store" Nov 12 17:41:42.955327 kubelet[2919]: I1112 17:41:42.955242 2919 policy_none.go:49] "None policy: Start" Nov 12 17:41:42.957105 kubelet[2919]: I1112 17:41:42.957052 2919 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 17:41:42.957105 kubelet[2919]: I1112 17:41:42.957108 2919 state_mem.go:35] "Initializing new in-memory state store" Nov 12 17:41:42.979850 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 17:41:42.992762 kubelet[2919]: I1112 17:41:42.992689 2919 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-62" Nov 12 17:41:42.994078 kubelet[2919]: E1112 17:41:42.993997 2919 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.62:6443/api/v1/nodes\": dial tcp 172.31.24.62:6443: connect: connection refused" node="ip-172-31-24-62" Nov 12 17:41:42.995060 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 17:41:43.003704 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 17:41:43.018743 kubelet[2919]: I1112 17:41:43.018666 2919 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 17:41:43.019756 kubelet[2919]: I1112 17:41:43.019668 2919 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 17:41:43.019931 kubelet[2919]: I1112 17:41:43.019861 2919 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 17:41:43.025114 kubelet[2919]: E1112 17:41:43.025046 2919 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-24-62\" not found" Nov 12 17:41:43.039130 kubelet[2919]: I1112 17:41:43.038860 2919 topology_manager.go:215] "Topology Admit Handler" podUID="00b2eb0e48ba07235032a849583892f0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-62" Nov 12 17:41:43.042329 kubelet[2919]: I1112 17:41:43.041844 2919 topology_manager.go:215] "Topology Admit Handler" podUID="2b9efa8f19505853aff56ce7ab9bd455" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-62" Nov 12 17:41:43.045659 kubelet[2919]: I1112 17:41:43.045590 2919 topology_manager.go:215] "Topology Admit Handler" podUID="7db7778dbe16d1043f4fc1454d591792" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-62" Nov 12 17:41:43.065735 systemd[1]: Created slice kubepods-burstable-pod00b2eb0e48ba07235032a849583892f0.slice - libcontainer container kubepods-burstable-pod00b2eb0e48ba07235032a849583892f0.slice. Nov 12 17:41:43.089957 systemd[1]: Created slice kubepods-burstable-pod2b9efa8f19505853aff56ce7ab9bd455.slice - libcontainer container kubepods-burstable-pod2b9efa8f19505853aff56ce7ab9bd455.slice. 
Nov 12 17:41:43.091766 kubelet[2919]: I1112 17:41:43.091709 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00b2eb0e48ba07235032a849583892f0-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-62\" (UID: \"00b2eb0e48ba07235032a849583892f0\") " pod="kube-system/kube-scheduler-ip-172-31-24-62" Nov 12 17:41:43.094427 kubelet[2919]: I1112 17:41:43.092489 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b9efa8f19505853aff56ce7ab9bd455-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-62\" (UID: \"2b9efa8f19505853aff56ce7ab9bd455\") " pod="kube-system/kube-apiserver-ip-172-31-24-62" Nov 12 17:41:43.094427 kubelet[2919]: I1112 17:41:43.092558 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62" Nov 12 17:41:43.094427 kubelet[2919]: I1112 17:41:43.092601 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62" Nov 12 17:41:43.094427 kubelet[2919]: I1112 17:41:43.092638 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: 
\"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62" Nov 12 17:41:43.094427 kubelet[2919]: I1112 17:41:43.092713 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b9efa8f19505853aff56ce7ab9bd455-ca-certs\") pod \"kube-apiserver-ip-172-31-24-62\" (UID: \"2b9efa8f19505853aff56ce7ab9bd455\") " pod="kube-system/kube-apiserver-ip-172-31-24-62" Nov 12 17:41:43.094811 kubelet[2919]: I1112 17:41:43.092753 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b9efa8f19505853aff56ce7ab9bd455-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-62\" (UID: \"2b9efa8f19505853aff56ce7ab9bd455\") " pod="kube-system/kube-apiserver-ip-172-31-24-62" Nov 12 17:41:43.094811 kubelet[2919]: I1112 17:41:43.092791 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62" Nov 12 17:41:43.094811 kubelet[2919]: I1112 17:41:43.092835 2919 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62" Nov 12 17:41:43.094811 kubelet[2919]: E1112 17:41:43.094692 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": dial tcp 172.31.24.62:6443: 
connect: connection refused" interval="400ms"
Nov 12 17:41:43.110766 systemd[1]: Created slice kubepods-burstable-pod7db7778dbe16d1043f4fc1454d591792.slice - libcontainer container kubepods-burstable-pod7db7778dbe16d1043f4fc1454d591792.slice.
Nov 12 17:41:43.197980 kubelet[2919]: I1112 17:41:43.197891 2919 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-62"
Nov 12 17:41:43.198934 kubelet[2919]: E1112 17:41:43.198375 2919 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.62:6443/api/v1/nodes\": dial tcp 172.31.24.62:6443: connect: connection refused" node="ip-172-31-24-62"
Nov 12 17:41:43.381802 containerd[2017]: time="2024-11-12T17:41:43.381595920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-62,Uid:00b2eb0e48ba07235032a849583892f0,Namespace:kube-system,Attempt:0,}"
Nov 12 17:41:43.407135 containerd[2017]: time="2024-11-12T17:41:43.406731228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-62,Uid:2b9efa8f19505853aff56ce7ab9bd455,Namespace:kube-system,Attempt:0,}"
Nov 12 17:41:43.418565 containerd[2017]: time="2024-11-12T17:41:43.418111044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-62,Uid:7db7778dbe16d1043f4fc1454d591792,Namespace:kube-system,Attempt:0,}"
Nov 12 17:41:43.495713 kubelet[2919]: E1112 17:41:43.495640 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": dial tcp 172.31.24.62:6443: connect: connection refused" interval="800ms"
Nov 12 17:41:43.602006 kubelet[2919]: I1112 17:41:43.601880 2919 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-62"
Nov 12 17:41:43.602577 kubelet[2919]: E1112 17:41:43.602499 2919 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.62:6443/api/v1/nodes\": dial tcp 172.31.24.62:6443: connect: connection refused" node="ip-172-31-24-62"
Nov 12 17:41:43.931376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount174384446.mount: Deactivated successfully.
Nov 12 17:41:43.941212 containerd[2017]: time="2024-11-12T17:41:43.941094578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.944800 containerd[2017]: time="2024-11-12T17:41:43.944742566Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:41:43.946400 containerd[2017]: time="2024-11-12T17:41:43.946236674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.947973 containerd[2017]: time="2024-11-12T17:41:43.947823302Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 17:41:43.951058 containerd[2017]: time="2024-11-12T17:41:43.950966138Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.951998 containerd[2017]: time="2024-11-12T17:41:43.951943070Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Nov 12 17:41:43.958426 containerd[2017]: time="2024-11-12T17:41:43.958239302Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.963302 containerd[2017]: time="2024-11-12T17:41:43.963101978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 17:41:43.964593 update_engine[1997]: I20241112 17:41:43.964151 1997 update_attempter.cc:509] Updating boot flags...
Nov 12 17:41:43.966701 containerd[2017]: time="2024-11-12T17:41:43.966037214Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.802822ms"
Nov 12 17:41:43.972628 containerd[2017]: time="2024-11-12T17:41:43.972500882Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.645814ms"
Nov 12 17:41:43.979940 containerd[2017]: time="2024-11-12T17:41:43.979377531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 597.630855ms"
Nov 12 17:41:43.991272 kubelet[2919]: W1112 17:41:43.990559 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.24.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:43.991272 kubelet[2919]: E1112 17:41:43.990665 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.24.62:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.134039 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2972)
Nov 12 17:41:44.157871 kubelet[2919]: W1112 17:41:44.157694 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.24.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.157871 kubelet[2919]: E1112 17:41:44.157796 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.24.62:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.232843 kubelet[2919]: W1112 17:41:44.232521 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.24.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-62&limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.232843 kubelet[2919]: E1112 17:41:44.232660 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.24.62:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-24-62&limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.296609 kubelet[2919]: E1112 17:41:44.296518 2919 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": dial tcp 172.31.24.62:6443: connect: connection refused" interval="1.6s"
Nov 12 17:41:44.324467 containerd[2017]: time="2024-11-12T17:41:44.324290544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:41:44.325251 containerd[2017]: time="2024-11-12T17:41:44.324933108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:41:44.325251 containerd[2017]: time="2024-11-12T17:41:44.324993324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:44.332603 kubelet[2919]: W1112 17:41:44.332431 2919 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.24.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.332603 kubelet[2919]: E1112 17:41:44.332562 2919 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.24.62:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.341950 containerd[2017]: time="2024-11-12T17:41:44.333419484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:41:44.341950 containerd[2017]: time="2024-11-12T17:41:44.333521244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:41:44.341950 containerd[2017]: time="2024-11-12T17:41:44.333558516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:44.341950 containerd[2017]: time="2024-11-12T17:41:44.333721740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:44.341950 containerd[2017]: time="2024-11-12T17:41:44.330117444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:44.385720 containerd[2017]: time="2024-11-12T17:41:44.382753093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:41:44.396115 containerd[2017]: time="2024-11-12T17:41:44.385254241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:41:44.396115 containerd[2017]: time="2024-11-12T17:41:44.385300453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:44.397074 containerd[2017]: time="2024-11-12T17:41:44.395415121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:41:44.408667 kubelet[2919]: I1112 17:41:44.408600 2919 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-62"
Nov 12 17:41:44.410950 kubelet[2919]: E1112 17:41:44.409664 2919 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.24.62:6443/api/v1/nodes\": dial tcp 172.31.24.62:6443: connect: connection refused" node="ip-172-31-24-62"
Nov 12 17:41:44.586800 systemd[1]: Started cri-containerd-4f88044b0218334d9e95c9f4d889e7e8c0d1db0fdbbad4d302b2708cc248579c.scope - libcontainer container 4f88044b0218334d9e95c9f4d889e7e8c0d1db0fdbbad4d302b2708cc248579c.
Nov 12 17:41:44.608347 systemd[1]: Started cri-containerd-8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6.scope - libcontainer container 8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6.
Nov 12 17:41:44.640227 systemd[1]: Started cri-containerd-4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf.scope - libcontainer container 4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf.
Nov 12 17:41:44.725626 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (2974)
Nov 12 17:41:44.894931 kubelet[2919]: E1112 17:41:44.894656 2919 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.24.62:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.24.62:6443: connect: connection refused
Nov 12 17:41:44.909563 containerd[2017]: time="2024-11-12T17:41:44.909507771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-24-62,Uid:2b9efa8f19505853aff56ce7ab9bd455,Namespace:kube-system,Attempt:0,} returns sandbox id \"4f88044b0218334d9e95c9f4d889e7e8c0d1db0fdbbad4d302b2708cc248579c\""
Nov 12 17:41:44.949001 containerd[2017]: time="2024-11-12T17:41:44.944184087Z" level=info msg="CreateContainer within sandbox \"4f88044b0218334d9e95c9f4d889e7e8c0d1db0fdbbad4d302b2708cc248579c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 17:41:45.023406 containerd[2017]: time="2024-11-12T17:41:45.022889448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-24-62,Uid:7db7778dbe16d1043f4fc1454d591792,Namespace:kube-system,Attempt:0,} returns sandbox id \"4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf\""
Nov 12 17:41:45.043791 containerd[2017]: time="2024-11-12T17:41:45.043716048Z" level=info msg="CreateContainer within sandbox \"4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 17:41:45.074023 containerd[2017]: time="2024-11-12T17:41:45.073783728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-24-62,Uid:00b2eb0e48ba07235032a849583892f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6\""
Nov 12 17:41:45.075506 containerd[2017]: time="2024-11-12T17:41:45.074416332Z" level=info msg="CreateContainer within sandbox \"4f88044b0218334d9e95c9f4d889e7e8c0d1db0fdbbad4d302b2708cc248579c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"934c4d4a4bb07edbf6100455af60094a92ff160e174d5c9b0450d913719442dc\""
Nov 12 17:41:45.085238 containerd[2017]: time="2024-11-12T17:41:45.085158564Z" level=info msg="StartContainer for \"934c4d4a4bb07edbf6100455af60094a92ff160e174d5c9b0450d913719442dc\""
Nov 12 17:41:45.093082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2125953482.mount: Deactivated successfully.
Nov 12 17:41:45.116002 containerd[2017]: time="2024-11-12T17:41:45.109165812Z" level=info msg="CreateContainer within sandbox \"8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 17:41:45.126958 containerd[2017]: time="2024-11-12T17:41:45.124241784Z" level=info msg="CreateContainer within sandbox \"4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7\""
Nov 12 17:41:45.129291 containerd[2017]: time="2024-11-12T17:41:45.127969500Z" level=info msg="StartContainer for \"7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7\""
Nov 12 17:41:45.177746 containerd[2017]: time="2024-11-12T17:41:45.177299796Z" level=info msg="CreateContainer within sandbox \"8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d\""
Nov 12 17:41:45.183429 containerd[2017]: time="2024-11-12T17:41:45.183339049Z" level=info msg="StartContainer for \"e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d\""
Nov 12 17:41:45.245396 systemd[1]: Started cri-containerd-934c4d4a4bb07edbf6100455af60094a92ff160e174d5c9b0450d913719442dc.scope - libcontainer container 934c4d4a4bb07edbf6100455af60094a92ff160e174d5c9b0450d913719442dc.
Nov 12 17:41:45.267326 systemd[1]: Started cri-containerd-7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7.scope - libcontainer container 7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7.
Nov 12 17:41:45.323425 systemd[1]: Started cri-containerd-e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d.scope - libcontainer container e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d.
Nov 12 17:41:45.430730 containerd[2017]: time="2024-11-12T17:41:45.430447682Z" level=info msg="StartContainer for \"934c4d4a4bb07edbf6100455af60094a92ff160e174d5c9b0450d913719442dc\" returns successfully"
Nov 12 17:41:45.489669 containerd[2017]: time="2024-11-12T17:41:45.489595454Z" level=info msg="StartContainer for \"7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7\" returns successfully"
Nov 12 17:41:45.537695 containerd[2017]: time="2024-11-12T17:41:45.537631946Z" level=info msg="StartContainer for \"e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d\" returns successfully"
Nov 12 17:41:46.019715 kubelet[2919]: I1112 17:41:46.019638 2919 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-62"
Nov 12 17:41:49.480239 kubelet[2919]: E1112 17:41:49.480144 2919 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-24-62\" not found" node="ip-172-31-24-62"
Nov 12 17:41:49.530447 kubelet[2919]: I1112 17:41:49.530290 2919 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-62"
Nov 12 17:41:49.861212 kubelet[2919]: I1112 17:41:49.861150 2919 apiserver.go:52] "Watching apiserver"
Nov 12 17:41:49.887970 kubelet[2919]: I1112 17:41:49.887691 2919 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 12 17:41:52.069121 systemd[1]: Reloading requested from client PID 3378 ('systemctl') (unit session-9.scope)...
Nov 12 17:41:52.069178 systemd[1]: Reloading...
Nov 12 17:41:52.341040 zram_generator::config[3421]: No configuration found.
Nov 12 17:41:52.611517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 17:41:52.824239 systemd[1]: Reloading finished in 753 ms.
Nov 12 17:41:52.920522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:41:52.938659 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 17:41:52.939390 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:41:52.939540 systemd[1]: kubelet.service: Consumed 1.742s CPU time, 113.4M memory peak, 0B memory swap peak.
Nov 12 17:41:52.949722 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 17:41:53.342282 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 17:41:53.349265 (kubelet)[3479]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 17:41:53.466866 kubelet[3479]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:41:53.466866 kubelet[3479]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 17:41:53.466866 kubelet[3479]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 17:41:53.467614 kubelet[3479]: I1112 17:41:53.466988 3479 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 17:41:53.480598 kubelet[3479]: I1112 17:41:53.480510 3479 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Nov 12 17:41:53.480598 kubelet[3479]: I1112 17:41:53.480575 3479 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 17:41:53.481158 kubelet[3479]: I1112 17:41:53.481107 3479 server.go:927] "Client rotation is on, will bootstrap in background"
Nov 12 17:41:53.485996 kubelet[3479]: I1112 17:41:53.485262 3479 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 17:41:53.488255 kubelet[3479]: I1112 17:41:53.487838 3479 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 17:41:53.513788 kubelet[3479]: I1112 17:41:53.513656 3479 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 17:41:53.515360 kubelet[3479]: I1112 17:41:53.515068 3479 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 17:41:53.516260 kubelet[3479]: I1112 17:41:53.515267 3479 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-24-62","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 17:41:53.516260 kubelet[3479]: I1112 17:41:53.515876 3479 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 17:41:53.516260 kubelet[3479]: I1112 17:41:53.515978 3479 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 17:41:53.516260 kubelet[3479]: I1112 17:41:53.516051 3479 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:41:53.519004 kubelet[3479]: I1112 17:41:53.518959 3479 kubelet.go:400] "Attempting to sync node with API server"
Nov 12 17:41:53.521158 kubelet[3479]: I1112 17:41:53.520981 3479 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 17:41:53.521662 kubelet[3479]: I1112 17:41:53.521449 3479 kubelet.go:312] "Adding apiserver pod source"
Nov 12 17:41:53.521662 kubelet[3479]: I1112 17:41:53.521493 3479 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 17:41:53.528929 kubelet[3479]: I1112 17:41:53.526399 3479 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 17:41:53.528929 kubelet[3479]: I1112 17:41:53.526814 3479 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 17:41:53.530711 kubelet[3479]: I1112 17:41:53.530660 3479 server.go:1264] "Started kubelet"
Nov 12 17:41:53.539129 kubelet[3479]: I1112 17:41:53.532781 3479 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 17:41:53.541934 kubelet[3479]: I1112 17:41:53.541118 3479 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 17:41:53.547037 kubelet[3479]: I1112 17:41:53.546985 3479 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 17:41:53.568547 kubelet[3479]: I1112 17:41:53.568507 3479 server.go:455] "Adding debug handlers to kubelet server"
Nov 12 17:41:53.580681 kubelet[3479]: I1112 17:41:53.561471 3479 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 17:41:53.597020 kubelet[3479]: I1112 17:41:53.595411 3479 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 17:41:53.604921 kubelet[3479]: I1112 17:41:53.604567 3479 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Nov 12 17:41:53.606627 kubelet[3479]: I1112 17:41:53.606351 3479 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 17:41:53.612623 kubelet[3479]: I1112 17:41:53.612567 3479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 17:41:53.617773 kubelet[3479]: I1112 17:41:53.617077 3479 factory.go:221] Registration of the systemd container factory successfully
Nov 12 17:41:53.617773 kubelet[3479]: I1112 17:41:53.617484 3479 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 17:41:53.621437 kubelet[3479]: I1112 17:41:53.620371 3479 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 17:41:53.621437 kubelet[3479]: I1112 17:41:53.620439 3479 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 17:41:53.621437 kubelet[3479]: I1112 17:41:53.620469 3479 kubelet.go:2337] "Starting kubelet main sync loop"
Nov 12 17:41:53.621437 kubelet[3479]: E1112 17:41:53.620736 3479 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 17:41:53.625337 kubelet[3479]: E1112 17:41:53.624837 3479 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Nov 12 17:41:53.636773 kubelet[3479]: I1112 17:41:53.636702 3479 factory.go:221] Registration of the containerd container factory successfully
Nov 12 17:41:53.715645 kubelet[3479]: I1112 17:41:53.715548 3479 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-24-62"
Nov 12 17:41:53.721163 kubelet[3479]: E1112 17:41:53.721086 3479 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 17:41:53.757344 kubelet[3479]: I1112 17:41:53.757280 3479 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-24-62"
Nov 12 17:41:53.757524 kubelet[3479]: I1112 17:41:53.757423 3479 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-24-62"
Nov 12 17:41:53.844474 kubelet[3479]: I1112 17:41:53.843417 3479 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 17:41:53.844474 kubelet[3479]: I1112 17:41:53.843448 3479 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 17:41:53.844474 kubelet[3479]: I1112 17:41:53.843483 3479 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 17:41:53.846692 kubelet[3479]: I1112 17:41:53.846280 3479 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 17:41:53.846692 kubelet[3479]: I1112 17:41:53.846362 3479 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 17:41:53.846692 kubelet[3479]: I1112 17:41:53.846480 3479 policy_none.go:49] "None policy: Start"
Nov 12 17:41:53.853464 kubelet[3479]: I1112 17:41:53.849328 3479 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 17:41:53.853464 kubelet[3479]: I1112 17:41:53.849380 3479 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 17:41:53.853464 kubelet[3479]: I1112 17:41:53.849738 3479 state_mem.go:75] "Updated machine memory state"
Nov 12 17:41:53.879397 kubelet[3479]: I1112 17:41:53.879353 3479 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 17:41:53.883889 kubelet[3479]: I1112 17:41:53.883812 3479 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 17:41:53.884512 kubelet[3479]: I1112 17:41:53.884476 3479 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 17:41:53.921659 kubelet[3479]: I1112 17:41:53.921234 3479 topology_manager.go:215] "Topology Admit Handler" podUID="2b9efa8f19505853aff56ce7ab9bd455" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-24-62"
Nov 12 17:41:53.924803 kubelet[3479]: I1112 17:41:53.923566 3479 topology_manager.go:215] "Topology Admit Handler" podUID="7db7778dbe16d1043f4fc1454d591792" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:53.927617 kubelet[3479]: I1112 17:41:53.927155 3479 topology_manager.go:215] "Topology Admit Handler" podUID="00b2eb0e48ba07235032a849583892f0" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-24-62"
Nov 12 17:41:53.961040 kubelet[3479]: E1112 17:41:53.959777 3479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-24-62\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:53.965878 kubelet[3479]: E1112 17:41:53.965658 3479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-62\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-62"
Nov 12 17:41:54.011020 kubelet[3479]: I1112 17:41:54.010396 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2b9efa8f19505853aff56ce7ab9bd455-k8s-certs\") pod \"kube-apiserver-ip-172-31-24-62\" (UID: \"2b9efa8f19505853aff56ce7ab9bd455\") " pod="kube-system/kube-apiserver-ip-172-31-24-62"
Nov 12 17:41:54.011020 kubelet[3479]: I1112 17:41:54.010460 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2b9efa8f19505853aff56ce7ab9bd455-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-24-62\" (UID: \"2b9efa8f19505853aff56ce7ab9bd455\") " pod="kube-system/kube-apiserver-ip-172-31-24-62"
Nov 12 17:41:54.011020 kubelet[3479]: I1112 17:41:54.010507 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-kubeconfig\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:54.011020 kubelet[3479]: I1112 17:41:54.010685 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/00b2eb0e48ba07235032a849583892f0-kubeconfig\") pod \"kube-scheduler-ip-172-31-24-62\" (UID: \"00b2eb0e48ba07235032a849583892f0\") " pod="kube-system/kube-scheduler-ip-172-31-24-62"
Nov 12 17:41:54.011020 kubelet[3479]: I1112 17:41:54.010751 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2b9efa8f19505853aff56ce7ab9bd455-ca-certs\") pod \"kube-apiserver-ip-172-31-24-62\" (UID: \"2b9efa8f19505853aff56ce7ab9bd455\") " pod="kube-system/kube-apiserver-ip-172-31-24-62"
Nov 12 17:41:54.011656 kubelet[3479]: I1112 17:41:54.010808 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-ca-certs\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:54.011656 kubelet[3479]: I1112 17:41:54.010853 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:54.011656 kubelet[3479]: I1112 17:41:54.010950 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-k8s-certs\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:54.011656 kubelet[3479]: I1112 17:41:54.011036 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7db7778dbe16d1043f4fc1454d591792-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-24-62\" (UID: \"7db7778dbe16d1043f4fc1454d591792\") " pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:54.526525 kubelet[3479]: I1112 17:41:54.526346 3479 apiserver.go:52] "Watching apiserver"
Nov 12 17:41:54.605973 kubelet[3479]: I1112 17:41:54.605823 3479 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 12 17:41:54.735441 kubelet[3479]: E1112 17:41:54.734526 3479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-24-62\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-24-62"
Nov 12 17:41:54.738686 kubelet[3479]: E1112 17:41:54.738598 3479 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-24-62\" already exists" pod="kube-system/kube-apiserver-ip-172-31-24-62"
Nov 12 17:41:54.765858 kubelet[3479]: I1112 17:41:54.765690 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-24-62" podStartSLOduration=3.765662304 podStartE2EDuration="3.765662304s" podCreationTimestamp="2024-11-12 17:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:41:54.764859492 +0000 UTC m=+1.405931120" watchObservedRunningTime="2024-11-12 17:41:54.765662304 +0000 UTC m=+1.406733920"
Nov 12 17:41:54.767307 kubelet[3479]: I1112 17:41:54.767151 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-24-62" podStartSLOduration=3.767122488 podStartE2EDuration="3.767122488s" podCreationTimestamp="2024-11-12 17:41:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:41:54.735639516 +0000 UTC m=+1.376711096" watchObservedRunningTime="2024-11-12 17:41:54.767122488 +0000 UTC m=+1.408194068"
Nov 12 17:41:54.838942 kubelet[3479]: I1112 17:41:54.838539 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-24-62" podStartSLOduration=1.838279656 podStartE2EDuration="1.838279656s" podCreationTimestamp="2024-11-12 17:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:41:54.783676632 +0000 UTC m=+1.424748212" watchObservedRunningTime="2024-11-12 17:41:54.838279656 +0000 UTC m=+1.479351284"
Nov 12 17:42:01.200144 sudo[2365]: pam_unix(sudo:session): session closed for user root
Nov 12 17:42:01.224080 sshd[2362]: pam_unix(sshd:session): session closed for user core
Nov 12 17:42:01.231471 systemd[1]: sshd@8-172.31.24.62:22-139.178.89.65:38462.service: Deactivated successfully.
Nov 12 17:42:01.240346 systemd[1]: session-9.scope: Deactivated successfully.
Nov 12 17:42:01.241250 systemd[1]: session-9.scope: Consumed 11.341s CPU time, 188.3M memory peak, 0B memory swap peak.
Nov 12 17:42:01.243507 systemd-logind[1993]: Session 9 logged out. Waiting for processes to exit.
Nov 12 17:42:01.248505 systemd-logind[1993]: Removed session 9.
Nov 12 17:42:07.191935 kubelet[3479]: I1112 17:42:07.191386 3479 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 17:42:07.192560 containerd[2017]: time="2024-11-12T17:42:07.192087154Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 17:42:07.193170 kubelet[3479]: I1112 17:42:07.192549 3479 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 17:42:07.965394 kubelet[3479]: I1112 17:42:07.965269 3479 topology_manager.go:215] "Topology Admit Handler" podUID="136f0c70-64d5-4ab2-baf6-2f3b38bb901f" podNamespace="kube-system" podName="kube-proxy-4kjvb"
Nov 12 17:42:07.991680 systemd[1]: Created slice kubepods-besteffort-pod136f0c70_64d5_4ab2_baf6_2f3b38bb901f.slice - libcontainer container kubepods-besteffort-pod136f0c70_64d5_4ab2_baf6_2f3b38bb901f.slice.
Nov 12 17:42:08.012086 kubelet[3479]: I1112 17:42:08.011828 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qphzh\" (UniqueName: \"kubernetes.io/projected/136f0c70-64d5-4ab2-baf6-2f3b38bb901f-kube-api-access-qphzh\") pod \"kube-proxy-4kjvb\" (UID: \"136f0c70-64d5-4ab2-baf6-2f3b38bb901f\") " pod="kube-system/kube-proxy-4kjvb" Nov 12 17:42:08.012086 kubelet[3479]: I1112 17:42:08.011925 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/136f0c70-64d5-4ab2-baf6-2f3b38bb901f-kube-proxy\") pod \"kube-proxy-4kjvb\" (UID: \"136f0c70-64d5-4ab2-baf6-2f3b38bb901f\") " pod="kube-system/kube-proxy-4kjvb" Nov 12 17:42:08.012086 kubelet[3479]: I1112 17:42:08.011970 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/136f0c70-64d5-4ab2-baf6-2f3b38bb901f-xtables-lock\") pod \"kube-proxy-4kjvb\" (UID: \"136f0c70-64d5-4ab2-baf6-2f3b38bb901f\") " pod="kube-system/kube-proxy-4kjvb" Nov 12 17:42:08.012086 kubelet[3479]: I1112 17:42:08.012006 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/136f0c70-64d5-4ab2-baf6-2f3b38bb901f-lib-modules\") pod \"kube-proxy-4kjvb\" (UID: \"136f0c70-64d5-4ab2-baf6-2f3b38bb901f\") " pod="kube-system/kube-proxy-4kjvb" Nov 12 17:42:08.308536 containerd[2017]: time="2024-11-12T17:42:08.308449175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kjvb,Uid:136f0c70-64d5-4ab2-baf6-2f3b38bb901f,Namespace:kube-system,Attempt:0,}" Nov 12 17:42:08.400669 containerd[2017]: time="2024-11-12T17:42:08.400169652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:08.400669 containerd[2017]: time="2024-11-12T17:42:08.400285956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:08.400669 containerd[2017]: time="2024-11-12T17:42:08.400330308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:08.400669 containerd[2017]: time="2024-11-12T17:42:08.400511712Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:08.462268 systemd[1]: Started cri-containerd-e49875992b0654233003435c5027bc7763a38addfd55d197d035a88789f14322.scope - libcontainer container e49875992b0654233003435c5027bc7763a38addfd55d197d035a88789f14322. Nov 12 17:42:08.476229 kubelet[3479]: I1112 17:42:08.475892 3479 topology_manager.go:215] "Topology Admit Handler" podUID="26af59d0-71df-40de-b80b-7421f635030d" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-76gpt" Nov 12 17:42:08.496817 systemd[1]: Created slice kubepods-besteffort-pod26af59d0_71df_40de_b80b_7421f635030d.slice - libcontainer container kubepods-besteffort-pod26af59d0_71df_40de_b80b_7421f635030d.slice. 
Nov 12 17:42:08.517305 kubelet[3479]: I1112 17:42:08.517191 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/26af59d0-71df-40de-b80b-7421f635030d-var-lib-calico\") pod \"tigera-operator-5645cfc98-76gpt\" (UID: \"26af59d0-71df-40de-b80b-7421f635030d\") " pod="tigera-operator/tigera-operator-5645cfc98-76gpt" Nov 12 17:42:08.517305 kubelet[3479]: I1112 17:42:08.517314 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmjk\" (UniqueName: \"kubernetes.io/projected/26af59d0-71df-40de-b80b-7421f635030d-kube-api-access-kfmjk\") pod \"tigera-operator-5645cfc98-76gpt\" (UID: \"26af59d0-71df-40de-b80b-7421f635030d\") " pod="tigera-operator/tigera-operator-5645cfc98-76gpt" Nov 12 17:42:08.596632 containerd[2017]: time="2024-11-12T17:42:08.596415733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4kjvb,Uid:136f0c70-64d5-4ab2-baf6-2f3b38bb901f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e49875992b0654233003435c5027bc7763a38addfd55d197d035a88789f14322\"" Nov 12 17:42:08.603268 containerd[2017]: time="2024-11-12T17:42:08.603157525Z" level=info msg="CreateContainer within sandbox \"e49875992b0654233003435c5027bc7763a38addfd55d197d035a88789f14322\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 17:42:08.631747 containerd[2017]: time="2024-11-12T17:42:08.631244101Z" level=info msg="CreateContainer within sandbox \"e49875992b0654233003435c5027bc7763a38addfd55d197d035a88789f14322\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4dc59789c33d1b6c9757a1f470372604461559243e00cf0fe2e497ff5bd9a2cb\"" Nov 12 17:42:08.636048 containerd[2017]: time="2024-11-12T17:42:08.634553173Z" level=info msg="StartContainer for \"4dc59789c33d1b6c9757a1f470372604461559243e00cf0fe2e497ff5bd9a2cb\"" Nov 12 17:42:08.708670 systemd[1]: Started 
cri-containerd-4dc59789c33d1b6c9757a1f470372604461559243e00cf0fe2e497ff5bd9a2cb.scope - libcontainer container 4dc59789c33d1b6c9757a1f470372604461559243e00cf0fe2e497ff5bd9a2cb. Nov 12 17:42:08.769757 containerd[2017]: time="2024-11-12T17:42:08.769675946Z" level=info msg="StartContainer for \"4dc59789c33d1b6c9757a1f470372604461559243e00cf0fe2e497ff5bd9a2cb\" returns successfully" Nov 12 17:42:08.804084 containerd[2017]: time="2024-11-12T17:42:08.804007562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-76gpt,Uid:26af59d0-71df-40de-b80b-7421f635030d,Namespace:tigera-operator,Attempt:0,}" Nov 12 17:42:08.875518 containerd[2017]: time="2024-11-12T17:42:08.874672706Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:08.875518 containerd[2017]: time="2024-11-12T17:42:08.874826738Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:08.875518 containerd[2017]: time="2024-11-12T17:42:08.874855838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:08.876452 containerd[2017]: time="2024-11-12T17:42:08.876133478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:08.931136 systemd[1]: Started cri-containerd-bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84.scope - libcontainer container bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84. 
Nov 12 17:42:09.060601 containerd[2017]: time="2024-11-12T17:42:09.059324435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-76gpt,Uid:26af59d0-71df-40de-b80b-7421f635030d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84\"" Nov 12 17:42:09.073416 containerd[2017]: time="2024-11-12T17:42:09.072964499Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 17:42:11.280272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2353692530.mount: Deactivated successfully. Nov 12 17:42:11.991825 containerd[2017]: time="2024-11-12T17:42:11.991753194Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:11.994388 containerd[2017]: time="2024-11-12T17:42:11.994292658Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=19123649" Nov 12 17:42:11.994740 containerd[2017]: time="2024-11-12T17:42:11.994543830Z" level=info msg="ImageCreate event name:\"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:11.999400 containerd[2017]: time="2024-11-12T17:42:11.999316590Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:12.001633 containerd[2017]: time="2024-11-12T17:42:12.001562102Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"19117824\" in 2.928488979s" Nov 12 17:42:12.001633 
containerd[2017]: time="2024-11-12T17:42:12.001627754Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:43f5078c762aa5421f1f6830afd7f91e05937aac6b1d97f0516065571164e9ee\"" Nov 12 17:42:12.007712 containerd[2017]: time="2024-11-12T17:42:12.007504850Z" level=info msg="CreateContainer within sandbox \"bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 17:42:12.030747 containerd[2017]: time="2024-11-12T17:42:12.030653090Z" level=info msg="CreateContainer within sandbox \"bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653\"" Nov 12 17:42:12.031978 containerd[2017]: time="2024-11-12T17:42:12.031817354Z" level=info msg="StartContainer for \"5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653\"" Nov 12 17:42:12.088437 systemd[1]: Started cri-containerd-5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653.scope - libcontainer container 5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653. 
Nov 12 17:42:12.137758 containerd[2017]: time="2024-11-12T17:42:12.137681282Z" level=info msg="StartContainer for \"5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653\" returns successfully" Nov 12 17:42:12.816858 kubelet[3479]: I1112 17:42:12.815361 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4kjvb" podStartSLOduration=5.815337138 podStartE2EDuration="5.815337138s" podCreationTimestamp="2024-11-12 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:42:08.809688758 +0000 UTC m=+15.450760362" watchObservedRunningTime="2024-11-12 17:42:12.815337138 +0000 UTC m=+19.456408742" Nov 12 17:42:13.650714 kubelet[3479]: I1112 17:42:13.650586 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-76gpt" podStartSLOduration=2.7146487710000002 podStartE2EDuration="5.64996143s" podCreationTimestamp="2024-11-12 17:42:08 +0000 UTC" firstStartedPulling="2024-11-12 17:42:09.068392643 +0000 UTC m=+15.709464235" lastFinishedPulling="2024-11-12 17:42:12.003705314 +0000 UTC m=+18.644776894" observedRunningTime="2024-11-12 17:42:12.818358078 +0000 UTC m=+19.459429658" watchObservedRunningTime="2024-11-12 17:42:13.64996143 +0000 UTC m=+20.291033094" Nov 12 17:42:17.655436 kubelet[3479]: I1112 17:42:17.655314 3479 topology_manager.go:215] "Topology Admit Handler" podUID="ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33" podNamespace="calico-system" podName="calico-typha-7794f47696-bq2tk" Nov 12 17:42:17.675939 kubelet[3479]: W1112 17:42:17.675052 3479 reflector.go:547] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-24-62" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-62' and this 
object Nov 12 17:42:17.675939 kubelet[3479]: E1112 17:42:17.675183 3479 reflector.go:150] object-"calico-system"/"tigera-ca-bundle": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ip-172-31-24-62" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-62' and this object Nov 12 17:42:17.675939 kubelet[3479]: W1112 17:42:17.675329 3479 reflector.go:547] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-62" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-62' and this object Nov 12 17:42:17.675939 kubelet[3479]: E1112 17:42:17.675357 3479 reflector.go:150] object-"calico-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-24-62" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-62' and this object Nov 12 17:42:17.675939 kubelet[3479]: W1112 17:42:17.675535 3479 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-24-62" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-62' and this object Nov 12 17:42:17.676401 kubelet[3479]: E1112 17:42:17.675568 3479 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ip-172-31-24-62" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-24-62' and this object 
Nov 12 17:42:17.682184 systemd[1]: Created slice kubepods-besteffort-podce8f83d0_6b75_46e6_9f8d_d5a0029c3b33.slice - libcontainer container kubepods-besteffort-podce8f83d0_6b75_46e6_9f8d_d5a0029c3b33.slice. Nov 12 17:42:17.695713 kubelet[3479]: I1112 17:42:17.695640 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2d75\" (UniqueName: \"kubernetes.io/projected/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-kube-api-access-g2d75\") pod \"calico-typha-7794f47696-bq2tk\" (UID: \"ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33\") " pod="calico-system/calico-typha-7794f47696-bq2tk" Nov 12 17:42:17.695713 kubelet[3479]: I1112 17:42:17.695713 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-tigera-ca-bundle\") pod \"calico-typha-7794f47696-bq2tk\" (UID: \"ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33\") " pod="calico-system/calico-typha-7794f47696-bq2tk" Nov 12 17:42:17.696039 kubelet[3479]: I1112 17:42:17.695756 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-typha-certs\") pod \"calico-typha-7794f47696-bq2tk\" (UID: \"ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33\") " pod="calico-system/calico-typha-7794f47696-bq2tk" Nov 12 17:42:17.925670 kubelet[3479]: I1112 17:42:17.922447 3479 topology_manager.go:215] "Topology Admit Handler" podUID="3e89709f-041d-461c-8d19-f32e52eff738" podNamespace="calico-system" podName="calico-node-7hn67" Nov 12 17:42:17.943017 systemd[1]: Created slice kubepods-besteffort-pod3e89709f_041d_461c_8d19_f32e52eff738.slice - libcontainer container kubepods-besteffort-pod3e89709f_041d_461c_8d19_f32e52eff738.slice. 
Nov 12 17:42:17.997733 kubelet[3479]: I1112 17:42:17.997682 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-policysync\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.998046 kubelet[3479]: I1112 17:42:17.998005 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3e89709f-041d-461c-8d19-f32e52eff738-node-certs\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.998740 kubelet[3479]: I1112 17:42:17.998274 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-cni-bin-dir\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.998740 kubelet[3479]: I1112 17:42:17.998332 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-cni-net-dir\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.998740 kubelet[3479]: I1112 17:42:17.998370 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-cni-log-dir\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.998740 kubelet[3479]: I1112 17:42:17.998413 3479 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-var-lib-calico\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.998740 kubelet[3479]: I1112 17:42:17.998507 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb99\" (UniqueName: \"kubernetes.io/projected/3e89709f-041d-461c-8d19-f32e52eff738-kube-api-access-6lb99\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.999146 kubelet[3479]: I1112 17:42:17.998561 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-var-run-calico\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.999146 kubelet[3479]: I1112 17:42:17.998617 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3e89709f-041d-461c-8d19-f32e52eff738-tigera-ca-bundle\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.999146 kubelet[3479]: I1112 17:42:17.998659 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-flexvol-driver-host\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.999146 kubelet[3479]: I1112 17:42:17.998731 3479 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-lib-modules\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:17.999146 kubelet[3479]: I1112 17:42:17.998778 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e89709f-041d-461c-8d19-f32e52eff738-xtables-lock\") pod \"calico-node-7hn67\" (UID: \"3e89709f-041d-461c-8d19-f32e52eff738\") " pod="calico-system/calico-node-7hn67" Nov 12 17:42:18.104325 kubelet[3479]: E1112 17:42:18.104259 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.104575 kubelet[3479]: W1112 17:42:18.104327 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.104575 kubelet[3479]: E1112 17:42:18.104436 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.106245 kubelet[3479]: E1112 17:42:18.106173 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.106529 kubelet[3479]: W1112 17:42:18.106447 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.107884 kubelet[3479]: E1112 17:42:18.106609 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.109008 kubelet[3479]: E1112 17:42:18.108820 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.110335 kubelet[3479]: W1112 17:42:18.108982 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.110335 kubelet[3479]: E1112 17:42:18.109977 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.110335 kubelet[3479]: E1112 17:42:18.110280 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.110335 kubelet[3479]: W1112 17:42:18.110335 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.111260 kubelet[3479]: E1112 17:42:18.111167 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.112060 kubelet[3479]: E1112 17:42:18.111986 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.112060 kubelet[3479]: W1112 17:42:18.112039 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.113752 kubelet[3479]: E1112 17:42:18.112125 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.114109 kubelet[3479]: E1112 17:42:18.114074 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.115006 kubelet[3479]: W1112 17:42:18.114319 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.119537 kubelet[3479]: E1112 17:42:18.119325 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.119537 kubelet[3479]: W1112 17:42:18.119408 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.121015 kubelet[3479]: E1112 17:42:18.120878 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.121015 kubelet[3479]: E1112 17:42:18.121005 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.124953 kubelet[3479]: E1112 17:42:18.124138 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.124953 kubelet[3479]: W1112 17:42:18.124197 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.124953 kubelet[3479]: E1112 17:42:18.124622 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.127604 kubelet[3479]: E1112 17:42:18.125137 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.127604 kubelet[3479]: W1112 17:42:18.125168 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.127604 kubelet[3479]: E1112 17:42:18.125537 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.127604 kubelet[3479]: W1112 17:42:18.125562 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.127604 kubelet[3479]: E1112 17:42:18.127152 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin 
from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.127604 kubelet[3479]: E1112 17:42:18.127217 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.128287 kubelet[3479]: E1112 17:42:18.128232 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.128287 kubelet[3479]: W1112 17:42:18.128277 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.132477 kubelet[3479]: E1112 17:42:18.132104 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.133045 kubelet[3479]: E1112 17:42:18.132953 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.133045 kubelet[3479]: W1112 17:42:18.133010 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.133045 kubelet[3479]: E1112 17:42:18.133094 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 17:42:18.133045 kubelet[3479]: E1112 17:42:18.133419 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.133749 kubelet[3479]: W1112 17:42:18.133437 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.134855 kubelet[3479]: E1112 17:42:18.133871 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.134855 kubelet[3479]: E1112 17:42:18.134529 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.134855 kubelet[3479]: W1112 17:42:18.134567 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.135504 kubelet[3479]: E1112 17:42:18.135073 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.135504 kubelet[3479]: E1112 17:42:18.135465 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.135504 kubelet[3479]: W1112 17:42:18.135487 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.137158 kubelet[3479]: E1112 17:42:18.135997 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.137158 kubelet[3479]: W1112 17:42:18.136031 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.137158 kubelet[3479]: E1112 17:42:18.136502 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.137158 kubelet[3479]: E1112 17:42:18.136607 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.138117 kubelet[3479]: E1112 17:42:18.137797 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.138117 kubelet[3479]: W1112 17:42:18.137836 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.138117 kubelet[3479]: E1112 17:42:18.137871 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.181302 kubelet[3479]: I1112 17:42:18.181102 3479 topology_manager.go:215] "Topology Admit Handler" podUID="272a2aec-8f98-4451-9782-58222f5f8977" podNamespace="calico-system" podName="csi-node-driver-qnp2k"
Nov 12 17:42:18.181949 kubelet[3479]: E1112 17:42:18.181570 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977"
Nov 12 17:42:18.203390 kubelet[3479]: E1112 17:42:18.203324 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.203390 kubelet[3479]: W1112 17:42:18.203366 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.204537 kubelet[3479]: E1112 17:42:18.203402 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.206260 kubelet[3479]: E1112 17:42:18.206169 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.206260 kubelet[3479]: W1112 17:42:18.206236 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.207161 kubelet[3479]: E1112 17:42:18.206294 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.207161 kubelet[3479]: E1112 17:42:18.207019 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.207161 kubelet[3479]: W1112 17:42:18.207052 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.207161 kubelet[3479]: E1112 17:42:18.207124 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.210036 kubelet[3479]: E1112 17:42:18.209919 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.210036 kubelet[3479]: W1112 17:42:18.210022 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.210304 kubelet[3479]: E1112 17:42:18.210082 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.211656 kubelet[3479]: E1112 17:42:18.211587 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.211656 kubelet[3479]: W1112 17:42:18.211646 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.212032 kubelet[3479]: E1112 17:42:18.211703 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.274092 kubelet[3479]: E1112 17:42:18.274004 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.274092 kubelet[3479]: W1112 17:42:18.274066 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.274379 kubelet[3479]: E1112 17:42:18.274114 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.274773 kubelet[3479]: E1112 17:42:18.274707 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.274773 kubelet[3479]: W1112 17:42:18.274753 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.275001 kubelet[3479]: E1112 17:42:18.274789 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.275345 kubelet[3479]: E1112 17:42:18.275289 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.275345 kubelet[3479]: W1112 17:42:18.275325 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.275500 kubelet[3479]: E1112 17:42:18.275358 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.276943 kubelet[3479]: E1112 17:42:18.276323 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.276943 kubelet[3479]: W1112 17:42:18.276377 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.276943 kubelet[3479]: E1112 17:42:18.276423 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.277187 kubelet[3479]: E1112 17:42:18.277080 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.277187 kubelet[3479]: W1112 17:42:18.277104 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.277187 kubelet[3479]: E1112 17:42:18.277132 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.278073 kubelet[3479]: E1112 17:42:18.277516 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.278073 kubelet[3479]: W1112 17:42:18.277550 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.278073 kubelet[3479]: E1112 17:42:18.277578 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.283167 kubelet[3479]: E1112 17:42:18.283106 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.283167 kubelet[3479]: W1112 17:42:18.283149 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.283350 kubelet[3479]: E1112 17:42:18.283186 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.284140 kubelet[3479]: E1112 17:42:18.284053 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.284140 kubelet[3479]: W1112 17:42:18.284115 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.284333 kubelet[3479]: E1112 17:42:18.284160 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.286934 kubelet[3479]: E1112 17:42:18.285989 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.286934 kubelet[3479]: W1112 17:42:18.286042 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.286934 kubelet[3479]: E1112 17:42:18.286095 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.286934 kubelet[3479]: E1112 17:42:18.286663 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.286934 kubelet[3479]: W1112 17:42:18.286688 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.286934 kubelet[3479]: E1112 17:42:18.286715 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.287763 kubelet[3479]: E1112 17:42:18.287688 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.287763 kubelet[3479]: W1112 17:42:18.287744 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.288024 kubelet[3479]: E1112 17:42:18.287790 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.288966 kubelet[3479]: E1112 17:42:18.288518 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.288966 kubelet[3479]: W1112 17:42:18.288551 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.288966 kubelet[3479]: E1112 17:42:18.288662 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.290216 kubelet[3479]: E1112 17:42:18.290113 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.290216 kubelet[3479]: W1112 17:42:18.290144 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.290216 kubelet[3479]: E1112 17:42:18.290176 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.290613 kubelet[3479]: E1112 17:42:18.290568 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.290613 kubelet[3479]: W1112 17:42:18.290598 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.291390 kubelet[3479]: E1112 17:42:18.290625 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.291390 kubelet[3479]: E1112 17:42:18.291222 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.291390 kubelet[3479]: W1112 17:42:18.291260 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.291390 kubelet[3479]: E1112 17:42:18.291298 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.294191 kubelet[3479]: E1112 17:42:18.294114 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.294191 kubelet[3479]: W1112 17:42:18.294157 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.294191 kubelet[3479]: E1112 17:42:18.294193 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.295343 kubelet[3479]: E1112 17:42:18.294761 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.295343 kubelet[3479]: W1112 17:42:18.294790 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.295343 kubelet[3479]: E1112 17:42:18.294821 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.295343 kubelet[3479]: E1112 17:42:18.295386 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.295343 kubelet[3479]: W1112 17:42:18.295414 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.295343 kubelet[3479]: E1112 17:42:18.295462 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.297848 kubelet[3479]: E1112 17:42:18.297330 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.297848 kubelet[3479]: W1112 17:42:18.297370 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.297848 kubelet[3479]: E1112 17:42:18.297406 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.297848 kubelet[3479]: E1112 17:42:18.297849 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.298191 kubelet[3479]: W1112 17:42:18.297874 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.298191 kubelet[3479]: E1112 17:42:18.297955 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.313077 kubelet[3479]: E1112 17:42:18.312836 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.313077 kubelet[3479]: W1112 17:42:18.313070 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.313367 kubelet[3479]: E1112 17:42:18.313126 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.313367 kubelet[3479]: I1112 17:42:18.313200 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/272a2aec-8f98-4451-9782-58222f5f8977-kubelet-dir\") pod \"csi-node-driver-qnp2k\" (UID: \"272a2aec-8f98-4451-9782-58222f5f8977\") " pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:18.315458 kubelet[3479]: E1112 17:42:18.315354 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.315458 kubelet[3479]: W1112 17:42:18.315388 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.315458 kubelet[3479]: E1112 17:42:18.315440 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.315658 kubelet[3479]: I1112 17:42:18.315500 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/272a2aec-8f98-4451-9782-58222f5f8977-socket-dir\") pod \"csi-node-driver-qnp2k\" (UID: \"272a2aec-8f98-4451-9782-58222f5f8977\") " pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:18.316336 kubelet[3479]: E1112 17:42:18.316278 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.316336 kubelet[3479]: W1112 17:42:18.316323 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.316726 kubelet[3479]: E1112 17:42:18.316629 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.318052 kubelet[3479]: E1112 17:42:18.317149 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.318052 kubelet[3479]: W1112 17:42:18.317179 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.318052 kubelet[3479]: E1112 17:42:18.317430 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.318052 kubelet[3479]: E1112 17:42:18.317836 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.318052 kubelet[3479]: W1112 17:42:18.317855 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.318434 kubelet[3479]: E1112 17:42:18.318096 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.318887 kubelet[3479]: E1112 17:42:18.318844 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.318887 kubelet[3479]: W1112 17:42:18.318877 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.319483 kubelet[3479]: E1112 17:42:18.319395 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.320060 kubelet[3479]: E1112 17:42:18.319823 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.320579 kubelet[3479]: W1112 17:42:18.319890 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.320665 kubelet[3479]: E1112 17:42:18.320586 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.321848 kubelet[3479]: E1112 17:42:18.321244 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.321848 kubelet[3479]: W1112 17:42:18.321277 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.321848 kubelet[3479]: E1112 17:42:18.321319 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.321848 kubelet[3479]: I1112 17:42:18.321361 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/272a2aec-8f98-4451-9782-58222f5f8977-registration-dir\") pod \"csi-node-driver-qnp2k\" (UID: \"272a2aec-8f98-4451-9782-58222f5f8977\") " pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:18.321848 kubelet[3479]: E1112 17:42:18.321662 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.321848 kubelet[3479]: W1112 17:42:18.321679 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.321848 kubelet[3479]: E1112 17:42:18.321698 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.323778 kubelet[3479]: E1112 17:42:18.323727 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.323778 kubelet[3479]: W1112 17:42:18.323765 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.324033 kubelet[3479]: E1112 17:42:18.323823 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.324461 kubelet[3479]: E1112 17:42:18.324405 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.324461 kubelet[3479]: W1112 17:42:18.324445 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.325040 kubelet[3479]: E1112 17:42:18.324487 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.325528 kubelet[3479]: E1112 17:42:18.325476 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.325528 kubelet[3479]: W1112 17:42:18.325499 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.325746 kubelet[3479]: E1112 17:42:18.325529 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.325746 kubelet[3479]: I1112 17:42:18.325579 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pszgh\" (UniqueName: \"kubernetes.io/projected/272a2aec-8f98-4451-9782-58222f5f8977-kube-api-access-pszgh\") pod \"csi-node-driver-qnp2k\" (UID: \"272a2aec-8f98-4451-9782-58222f5f8977\") " pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:18.327425 kubelet[3479]: E1112 17:42:18.327364 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.327425 kubelet[3479]: W1112 17:42:18.327412 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.327612 kubelet[3479]: E1112 17:42:18.327462 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.328669 kubelet[3479]: E1112 17:42:18.328590 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.328669 kubelet[3479]: W1112 17:42:18.328642 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.329093 kubelet[3479]: E1112 17:42:18.328938 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.330785 kubelet[3479]: E1112 17:42:18.329675 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.330785 kubelet[3479]: W1112 17:42:18.329699 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.330785 kubelet[3479]: E1112 17:42:18.329745 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.330785 kubelet[3479]: E1112 17:42:18.330347 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.330785 kubelet[3479]: W1112 17:42:18.330379 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.330785 kubelet[3479]: E1112 17:42:18.330415 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.332708 kubelet[3479]: E1112 17:42:18.332627 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.332708 kubelet[3479]: W1112 17:42:18.332680 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.333001 kubelet[3479]: E1112 17:42:18.332729 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.333001 kubelet[3479]: I1112 17:42:18.332803 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/272a2aec-8f98-4451-9782-58222f5f8977-varrun\") pod \"csi-node-driver-qnp2k\" (UID: \"272a2aec-8f98-4451-9782-58222f5f8977\") " pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:18.333663 kubelet[3479]: E1112 17:42:18.333546 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.333663 kubelet[3479]: W1112 17:42:18.333573 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.333663 kubelet[3479]: E1112 17:42:18.333629 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.334189 kubelet[3479]: E1112 17:42:18.334130 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.334189 kubelet[3479]: W1112 17:42:18.334167 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.334378 kubelet[3479]: E1112 17:42:18.334197 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.335664 kubelet[3479]: E1112 17:42:18.335607 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.335664 kubelet[3479]: W1112 17:42:18.335645 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.335874 kubelet[3479]: E1112 17:42:18.335680 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.436067 kubelet[3479]: E1112 17:42:18.435932 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.436067 kubelet[3479]: W1112 17:42:18.435971 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.436067 kubelet[3479]: E1112 17:42:18.436004 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 17:42:18.437767 kubelet[3479]: E1112 17:42:18.437626 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 17:42:18.438492 kubelet[3479]: W1112 17:42:18.437661 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 17:42:18.438492 kubelet[3479]: E1112 17:42:18.437950 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Nov 12 17:42:18.440091 kubelet[3479]: E1112 17:42:18.439582 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.440091 kubelet[3479]: W1112 17:42:18.439705 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.440091 kubelet[3479]: E1112 17:42:18.439765 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.441966 kubelet[3479]: E1112 17:42:18.441358 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.441966 kubelet[3479]: W1112 17:42:18.441413 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.441966 kubelet[3479]: E1112 17:42:18.441497 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.443383 kubelet[3479]: E1112 17:42:18.442833 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.443383 kubelet[3479]: W1112 17:42:18.442864 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.443383 kubelet[3479]: E1112 17:42:18.443033 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.445055 kubelet[3479]: E1112 17:42:18.444568 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.445055 kubelet[3479]: W1112 17:42:18.444602 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.445055 kubelet[3479]: E1112 17:42:18.445003 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.445441 kubelet[3479]: E1112 17:42:18.445407 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.446280 kubelet[3479]: W1112 17:42:18.445996 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.446280 kubelet[3479]: E1112 17:42:18.446266 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.446651 kubelet[3479]: E1112 17:42:18.446626 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.446771 kubelet[3479]: W1112 17:42:18.446747 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.446986 kubelet[3479]: E1112 17:42:18.446950 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.447606 kubelet[3479]: E1112 17:42:18.447486 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.447606 kubelet[3479]: W1112 17:42:18.447520 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.448283 kubelet[3479]: E1112 17:42:18.448081 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.448816 kubelet[3479]: E1112 17:42:18.448605 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.448816 kubelet[3479]: W1112 17:42:18.448628 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.448816 kubelet[3479]: E1112 17:42:18.448680 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.449186 kubelet[3479]: E1112 17:42:18.449161 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.449289 kubelet[3479]: W1112 17:42:18.449265 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.449484 kubelet[3479]: E1112 17:42:18.449440 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.450266 kubelet[3479]: E1112 17:42:18.450214 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.450512 kubelet[3479]: W1112 17:42:18.450468 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.450773 kubelet[3479]: E1112 17:42:18.450681 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.451470 kubelet[3479]: E1112 17:42:18.451424 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.452573 kubelet[3479]: W1112 17:42:18.451573 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.452573 kubelet[3479]: E1112 17:42:18.452209 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.453345 kubelet[3479]: E1112 17:42:18.453310 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.453995 kubelet[3479]: W1112 17:42:18.453504 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.454316 kubelet[3479]: E1112 17:42:18.454268 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.454580 kubelet[3479]: E1112 17:42:18.454552 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.454803 kubelet[3479]: W1112 17:42:18.454680 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.455094 kubelet[3479]: E1112 17:42:18.454979 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.456252 kubelet[3479]: E1112 17:42:18.455840 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.456252 kubelet[3479]: W1112 17:42:18.455886 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.456594 kubelet[3479]: E1112 17:42:18.456558 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.457303 kubelet[3479]: E1112 17:42:18.457132 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.457303 kubelet[3479]: W1112 17:42:18.457167 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.457303 kubelet[3479]: E1112 17:42:18.457199 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.458587 kubelet[3479]: E1112 17:42:18.458165 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.458587 kubelet[3479]: W1112 17:42:18.458207 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.459627 kubelet[3479]: E1112 17:42:18.458982 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.459627 kubelet[3479]: W1112 17:42:18.459101 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.459627 kubelet[3479]: E1112 17:42:18.459146 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.461027 kubelet[3479]: E1112 17:42:18.459353 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.461027 kubelet[3479]: E1112 17:42:18.460340 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.461027 kubelet[3479]: W1112 17:42:18.460669 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.461027 kubelet[3479]: E1112 17:42:18.460945 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.462584 kubelet[3479]: E1112 17:42:18.462299 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.462584 kubelet[3479]: W1112 17:42:18.462337 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.462584 kubelet[3479]: E1112 17:42:18.462415 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.463941 kubelet[3479]: E1112 17:42:18.463769 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.464972 kubelet[3479]: W1112 17:42:18.464672 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.465233 kubelet[3479]: E1112 17:42:18.465159 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.466219 kubelet[3479]: E1112 17:42:18.466012 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.466219 kubelet[3479]: W1112 17:42:18.466043 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.466219 kubelet[3479]: E1112 17:42:18.466097 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.468263 kubelet[3479]: E1112 17:42:18.468224 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.468764 kubelet[3479]: W1112 17:42:18.468422 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.468994 kubelet[3479]: E1112 17:42:18.468784 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.470140 kubelet[3479]: E1112 17:42:18.469861 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.470140 kubelet[3479]: W1112 17:42:18.469926 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.470140 kubelet[3479]: E1112 17:42:18.469995 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.471180 kubelet[3479]: E1112 17:42:18.470825 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.471180 kubelet[3479]: W1112 17:42:18.470872 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.471180 kubelet[3479]: E1112 17:42:18.470987 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.471557 kubelet[3479]: E1112 17:42:18.471464 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.471557 kubelet[3479]: W1112 17:42:18.471495 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.471557 kubelet[3479]: E1112 17:42:18.471537 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.473250 kubelet[3479]: E1112 17:42:18.473173 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.473250 kubelet[3479]: W1112 17:42:18.473227 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.473478 kubelet[3479]: E1112 17:42:18.473274 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.474088 kubelet[3479]: E1112 17:42:18.474037 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.474088 kubelet[3479]: W1112 17:42:18.474075 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.474278 kubelet[3479]: E1112 17:42:18.474108 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.475017 kubelet[3479]: E1112 17:42:18.474731 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.475017 kubelet[3479]: W1112 17:42:18.474780 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.475017 kubelet[3479]: E1112 17:42:18.474818 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.567997 kubelet[3479]: E1112 17:42:18.567940 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.568652 kubelet[3479]: W1112 17:42:18.568292 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.568652 kubelet[3479]: E1112 17:42:18.568365 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.569428 kubelet[3479]: E1112 17:42:18.569323 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.569708 kubelet[3479]: W1112 17:42:18.569544 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.570049 kubelet[3479]: E1112 17:42:18.569805 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.570277 kubelet[3479]: E1112 17:42:18.570253 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.570444 kubelet[3479]: W1112 17:42:18.570408 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.570582 kubelet[3479]: E1112 17:42:18.570549 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.571702 kubelet[3479]: E1112 17:42:18.571646 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.572132 kubelet[3479]: W1112 17:42:18.571847 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.572132 kubelet[3479]: E1112 17:42:18.571886 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.572503 kubelet[3479]: E1112 17:42:18.572460 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.572612 kubelet[3479]: W1112 17:42:18.572589 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.572729 kubelet[3479]: E1112 17:42:18.572706 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.574041 kubelet[3479]: E1112 17:42:18.573969 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.574319 kubelet[3479]: W1112 17:42:18.574183 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.574319 kubelet[3479]: E1112 17:42:18.574217 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.632890 kubelet[3479]: E1112 17:42:18.632832 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.632890 kubelet[3479]: W1112 17:42:18.632870 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.633129 kubelet[3479]: E1112 17:42:18.632935 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.634229 kubelet[3479]: E1112 17:42:18.634181 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.634229 kubelet[3479]: W1112 17:42:18.634216 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.634439 kubelet[3479]: E1112 17:42:18.634248 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.676840 kubelet[3479]: E1112 17:42:18.676518 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.676840 kubelet[3479]: W1112 17:42:18.676571 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.676840 kubelet[3479]: E1112 17:42:18.676619 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.678506 kubelet[3479]: E1112 17:42:18.677930 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.678506 kubelet[3479]: W1112 17:42:18.677962 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.678506 kubelet[3479]: E1112 17:42:18.677989 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.678506 kubelet[3479]: E1112 17:42:18.678310 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.678506 kubelet[3479]: W1112 17:42:18.678326 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.678506 kubelet[3479]: E1112 17:42:18.678347 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.679304 kubelet[3479]: E1112 17:42:18.679186 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.679304 kubelet[3479]: W1112 17:42:18.679211 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.679304 kubelet[3479]: E1112 17:42:18.679238 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.780462 kubelet[3479]: E1112 17:42:18.780309 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.780462 kubelet[3479]: W1112 17:42:18.780346 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.780462 kubelet[3479]: E1112 17:42:18.780383 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.781757 kubelet[3479]: E1112 17:42:18.781388 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.781757 kubelet[3479]: W1112 17:42:18.781420 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.781757 kubelet[3479]: E1112 17:42:18.781450 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.783378 kubelet[3479]: E1112 17:42:18.783048 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.783378 kubelet[3479]: W1112 17:42:18.783096 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.783378 kubelet[3479]: E1112 17:42:18.783126 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.784834 kubelet[3479]: E1112 17:42:18.784481 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.784834 kubelet[3479]: W1112 17:42:18.784526 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.784834 kubelet[3479]: E1112 17:42:18.784565 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.798119 kubelet[3479]: E1112 17:42:18.797654 3479 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Nov 12 17:42:18.798119 kubelet[3479]: E1112 17:42:18.797780 3479 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-typha-certs podName:ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33 nodeName:}" failed. No retries permitted until 2024-11-12 17:42:19.297749631 +0000 UTC m=+25.938821211 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-typha-certs") pod "calico-typha-7794f47696-bq2tk" (UID: "ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33") : failed to sync secret cache: timed out waiting for the condition Nov 12 17:42:18.828007 kubelet[3479]: E1112 17:42:18.827947 3479 projected.go:294] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Nov 12 17:42:18.828571 kubelet[3479]: E1112 17:42:18.828316 3479 projected.go:200] Error preparing data for projected volume kube-api-access-g2d75 for pod calico-system/calico-typha-7794f47696-bq2tk: failed to sync configmap cache: timed out waiting for the condition Nov 12 17:42:18.828571 kubelet[3479]: E1112 17:42:18.828505 3479 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-kube-api-access-g2d75 podName:ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33 nodeName:}" failed. No retries permitted until 2024-11-12 17:42:19.328463824 +0000 UTC m=+25.969535416 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-g2d75" (UniqueName: "kubernetes.io/projected/ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33-kube-api-access-g2d75") pod "calico-typha-7794f47696-bq2tk" (UID: "ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33") : failed to sync configmap cache: timed out waiting for the condition Nov 12 17:42:18.885837 kubelet[3479]: E1112 17:42:18.885775 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.885837 kubelet[3479]: W1112 17:42:18.885828 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.886142 kubelet[3479]: E1112 17:42:18.885873 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.886515 kubelet[3479]: E1112 17:42:18.886475 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.886613 kubelet[3479]: W1112 17:42:18.886514 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.886613 kubelet[3479]: E1112 17:42:18.886547 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.887163 kubelet[3479]: E1112 17:42:18.887114 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.887163 kubelet[3479]: W1112 17:42:18.887158 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.887487 kubelet[3479]: E1112 17:42:18.887194 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.887774 kubelet[3479]: E1112 17:42:18.887716 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.887774 kubelet[3479]: W1112 17:42:18.887755 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.887956 kubelet[3479]: E1112 17:42:18.887792 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.989673 kubelet[3479]: E1112 17:42:18.989385 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.989673 kubelet[3479]: W1112 17:42:18.989424 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.989673 kubelet[3479]: E1112 17:42:18.989457 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.990203 kubelet[3479]: E1112 17:42:18.990056 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.990203 kubelet[3479]: W1112 17:42:18.990093 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.990203 kubelet[3479]: E1112 17:42:18.990129 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:18.990677 kubelet[3479]: E1112 17:42:18.990643 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.990677 kubelet[3479]: W1112 17:42:18.990676 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.990887 kubelet[3479]: E1112 17:42:18.990706 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:18.991186 kubelet[3479]: E1112 17:42:18.991154 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:18.991271 kubelet[3479]: W1112 17:42:18.991185 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:18.991271 kubelet[3479]: E1112 17:42:18.991214 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.093490 kubelet[3479]: E1112 17:42:19.093427 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.093490 kubelet[3479]: W1112 17:42:19.093474 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.093775 kubelet[3479]: E1112 17:42:19.093510 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.094177 kubelet[3479]: E1112 17:42:19.094060 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.094177 kubelet[3479]: W1112 17:42:19.094162 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.094443 kubelet[3479]: E1112 17:42:19.094209 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.095429 kubelet[3479]: E1112 17:42:19.095333 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.095429 kubelet[3479]: W1112 17:42:19.095410 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.095813 kubelet[3479]: E1112 17:42:19.095479 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.096099 kubelet[3479]: E1112 17:42:19.096065 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.096186 kubelet[3479]: W1112 17:42:19.096102 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.096186 kubelet[3479]: E1112 17:42:19.096134 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.158688 kubelet[3479]: E1112 17:42:19.158217 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.158688 kubelet[3479]: W1112 17:42:19.158306 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.158688 kubelet[3479]: E1112 17:42:19.158381 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.174299 kubelet[3479]: E1112 17:42:19.174219 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.174299 kubelet[3479]: W1112 17:42:19.174274 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.174529 kubelet[3479]: E1112 17:42:19.174319 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.197605 kubelet[3479]: E1112 17:42:19.197542 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.197789 kubelet[3479]: W1112 17:42:19.197610 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.197789 kubelet[3479]: E1112 17:42:19.197649 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.198360 kubelet[3479]: E1112 17:42:19.198320 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.198360 kubelet[3479]: W1112 17:42:19.198360 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.198606 kubelet[3479]: E1112 17:42:19.198394 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.300567 kubelet[3479]: E1112 17:42:19.300488 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.300567 kubelet[3479]: W1112 17:42:19.300545 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.301223 kubelet[3479]: E1112 17:42:19.300594 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.301223 kubelet[3479]: E1112 17:42:19.301102 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.301223 kubelet[3479]: W1112 17:42:19.301125 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.301223 kubelet[3479]: E1112 17:42:19.301172 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.302793 kubelet[3479]: E1112 17:42:19.301584 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.302793 kubelet[3479]: W1112 17:42:19.301615 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.302793 kubelet[3479]: E1112 17:42:19.301655 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.302793 kubelet[3479]: E1112 17:42:19.302262 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.302793 kubelet[3479]: W1112 17:42:19.302292 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.302793 kubelet[3479]: E1112 17:42:19.302324 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.302793 kubelet[3479]: E1112 17:42:19.302754 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.302793 kubelet[3479]: W1112 17:42:19.302777 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.302793 kubelet[3479]: E1112 17:42:19.302803 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.303409 kubelet[3479]: E1112 17:42:19.303242 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.303409 kubelet[3479]: W1112 17:42:19.303262 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.303409 kubelet[3479]: E1112 17:42:19.303286 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.316402 kubelet[3479]: E1112 17:42:19.316165 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.316402 kubelet[3479]: W1112 17:42:19.316215 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.316402 kubelet[3479]: E1112 17:42:19.316258 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.402610 kubelet[3479]: E1112 17:42:19.402438 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.402610 kubelet[3479]: W1112 17:42:19.402479 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.402610 kubelet[3479]: E1112 17:42:19.402516 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.404961 kubelet[3479]: E1112 17:42:19.403201 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.404961 kubelet[3479]: W1112 17:42:19.403250 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.404961 kubelet[3479]: E1112 17:42:19.403358 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.404961 kubelet[3479]: E1112 17:42:19.404083 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.404961 kubelet[3479]: W1112 17:42:19.404109 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.404961 kubelet[3479]: E1112 17:42:19.404136 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.404961 kubelet[3479]: E1112 17:42:19.404959 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.405605 kubelet[3479]: W1112 17:42:19.404989 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.405605 kubelet[3479]: E1112 17:42:19.405022 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.405605 kubelet[3479]: E1112 17:42:19.405599 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.405768 kubelet[3479]: W1112 17:42:19.405622 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.405768 kubelet[3479]: E1112 17:42:19.405705 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 17:42:19.417440 kubelet[3479]: E1112 17:42:19.417286 3479 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 17:42:19.417440 kubelet[3479]: W1112 17:42:19.417322 3479 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 17:42:19.417440 kubelet[3479]: E1112 17:42:19.417358 3479 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 17:42:19.453188 containerd[2017]: time="2024-11-12T17:42:19.453116843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hn67,Uid:3e89709f-041d-461c-8d19-f32e52eff738,Namespace:calico-system,Attempt:0,}" Nov 12 17:42:19.494810 containerd[2017]: time="2024-11-12T17:42:19.492445799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7794f47696-bq2tk,Uid:ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33,Namespace:calico-system,Attempt:0,}" Nov 12 17:42:19.520483 containerd[2017]: time="2024-11-12T17:42:19.520206059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:19.520483 containerd[2017]: time="2024-11-12T17:42:19.520350167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:19.520483 containerd[2017]: time="2024-11-12T17:42:19.520389143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:19.521289 containerd[2017]: time="2024-11-12T17:42:19.520803875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:19.576234 systemd[1]: Started cri-containerd-34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84.scope - libcontainer container 34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84. Nov 12 17:42:19.614747 containerd[2017]: time="2024-11-12T17:42:19.611847444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:19.616016 containerd[2017]: time="2024-11-12T17:42:19.615816096Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:19.616353 containerd[2017]: time="2024-11-12T17:42:19.616026960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:19.617289 containerd[2017]: time="2024-11-12T17:42:19.616435824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:19.625027 kubelet[3479]: E1112 17:42:19.623527 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977" Nov 12 17:42:19.711623 systemd[1]: Started cri-containerd-1156c941d7c30f4309f57439424aa3bf3229ba2e40eda02d1031e7aa752bdc3b.scope - libcontainer container 1156c941d7c30f4309f57439424aa3bf3229ba2e40eda02d1031e7aa752bdc3b. 
Nov 12 17:42:19.772589 containerd[2017]: time="2024-11-12T17:42:19.771644280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hn67,Uid:3e89709f-041d-461c-8d19-f32e52eff738,Namespace:calico-system,Attempt:0,} returns sandbox id \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\"" Nov 12 17:42:19.778749 containerd[2017]: time="2024-11-12T17:42:19.777848316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 17:42:19.891151 containerd[2017]: time="2024-11-12T17:42:19.891083665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7794f47696-bq2tk,Uid:ce8f83d0-6b75-46e6-9f8d-d5a0029c3b33,Namespace:calico-system,Attempt:0,} returns sandbox id \"1156c941d7c30f4309f57439424aa3bf3229ba2e40eda02d1031e7aa752bdc3b\"" Nov 12 17:42:21.126948 containerd[2017]: time="2024-11-12T17:42:21.124943075Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:21.129799 containerd[2017]: time="2024-11-12T17:42:21.129716219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5117816" Nov 12 17:42:21.132033 containerd[2017]: time="2024-11-12T17:42:21.131890175Z" level=info msg="ImageCreate event name:\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:21.138750 containerd[2017]: time="2024-11-12T17:42:21.138595007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:21.140797 containerd[2017]: time="2024-11-12T17:42:21.140465831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id 
\"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6487412\" in 1.361274475s" Nov 12 17:42:21.140797 containerd[2017]: time="2024-11-12T17:42:21.140543243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:bd15f6fc4f6c943c0f50373a7141cb17e8f12e21aaad47c24b6667c3f1c9947e\"" Nov 12 17:42:21.148056 containerd[2017]: time="2024-11-12T17:42:21.147341939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 17:42:21.151818 containerd[2017]: time="2024-11-12T17:42:21.151492463Z" level=info msg="CreateContainer within sandbox \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 17:42:21.196212 containerd[2017]: time="2024-11-12T17:42:21.194990975Z" level=info msg="CreateContainer within sandbox \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77\"" Nov 12 17:42:21.201563 containerd[2017]: time="2024-11-12T17:42:21.201466931Z" level=info msg="StartContainer for \"a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77\"" Nov 12 17:42:21.312240 systemd[1]: Started cri-containerd-a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77.scope - libcontainer container a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77. 
Nov 12 17:42:21.393718 containerd[2017]: time="2024-11-12T17:42:21.393195372Z" level=info msg="StartContainer for \"a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77\" returns successfully" Nov 12 17:42:21.444674 systemd[1]: cri-containerd-a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77.scope: Deactivated successfully. Nov 12 17:42:21.547965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77-rootfs.mount: Deactivated successfully. Nov 12 17:42:21.622033 kubelet[3479]: E1112 17:42:21.621241 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977" Nov 12 17:42:21.654461 containerd[2017]: time="2024-11-12T17:42:21.653495594Z" level=info msg="shim disconnected" id=a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77 namespace=k8s.io Nov 12 17:42:21.654461 containerd[2017]: time="2024-11-12T17:42:21.653573846Z" level=warning msg="cleaning up after shim disconnected" id=a5d139d19dd420b6f3cac6058c3929e8528e136711347b171cd7eee955687d77 namespace=k8s.io Nov 12 17:42:21.654461 containerd[2017]: time="2024-11-12T17:42:21.653594306Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 17:42:23.087467 containerd[2017]: time="2024-11-12T17:42:23.087355165Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:23.090179 containerd[2017]: time="2024-11-12T17:42:23.090096145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=27849584" Nov 12 17:42:23.090960 containerd[2017]: time="2024-11-12T17:42:23.090637813Z" level=info msg="ImageCreate 
event name:\"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:23.095473 containerd[2017]: time="2024-11-12T17:42:23.095338573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:23.097506 containerd[2017]: time="2024-11-12T17:42:23.097276993Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"29219212\" in 1.949869954s" Nov 12 17:42:23.097506 containerd[2017]: time="2024-11-12T17:42:23.097352293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:b2bb88f3f42552b429baa4766d841334e258ac314fd6372cf3b9700487183ad3\"" Nov 12 17:42:23.100227 containerd[2017]: time="2024-11-12T17:42:23.100153597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 17:42:23.137303 containerd[2017]: time="2024-11-12T17:42:23.136984525Z" level=info msg="CreateContainer within sandbox \"1156c941d7c30f4309f57439424aa3bf3229ba2e40eda02d1031e7aa752bdc3b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 17:42:23.156816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634778244.mount: Deactivated successfully. 
Nov 12 17:42:23.160620 containerd[2017]: time="2024-11-12T17:42:23.160521337Z" level=info msg="CreateContainer within sandbox \"1156c941d7c30f4309f57439424aa3bf3229ba2e40eda02d1031e7aa752bdc3b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cc47f1d77666c19567c09ab5e74c9c411c0547288594d34d046f4446d064244e\""
Nov 12 17:42:23.163855 containerd[2017]: time="2024-11-12T17:42:23.162726169Z" level=info msg="StartContainer for \"cc47f1d77666c19567c09ab5e74c9c411c0547288594d34d046f4446d064244e\""
Nov 12 17:42:23.218309 systemd[1]: Started cri-containerd-cc47f1d77666c19567c09ab5e74c9c411c0547288594d34d046f4446d064244e.scope - libcontainer container cc47f1d77666c19567c09ab5e74c9c411c0547288594d34d046f4446d064244e.
Nov 12 17:42:23.300562 containerd[2017]: time="2024-11-12T17:42:23.300209666Z" level=info msg="StartContainer for \"cc47f1d77666c19567c09ab5e74c9c411c0547288594d34d046f4446d064244e\" returns successfully"
Nov 12 17:42:23.623229 kubelet[3479]: E1112 17:42:23.622164 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977"
Nov 12 17:42:24.864711 kubelet[3479]: I1112 17:42:24.864543 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:42:25.621323 kubelet[3479]: E1112 17:42:25.621182 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977"
Nov 12 17:42:27.308009 containerd[2017]: time="2024-11-12T17:42:27.307770486Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:27.310007 containerd[2017]: time="2024-11-12T17:42:27.309946758Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=89700517"
Nov 12 17:42:27.311401 containerd[2017]: time="2024-11-12T17:42:27.311251578Z" level=info msg="ImageCreate event name:\"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:27.317299 containerd[2017]: time="2024-11-12T17:42:27.317198766Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 17:42:27.319409 containerd[2017]: time="2024-11-12T17:42:27.319213458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"91070153\" in 4.218989097s"
Nov 12 17:42:27.319409 containerd[2017]: time="2024-11-12T17:42:27.319270182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:9c7b7d79ea478f25cd5de34ec1519a0aaa7ac440910e61075e65092a94aea41f\""
Nov 12 17:42:27.326117 containerd[2017]: time="2024-11-12T17:42:27.326028810Z" level=info msg="CreateContainer within sandbox \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 17:42:27.352627 containerd[2017]: time="2024-11-12T17:42:27.352075482Z" level=info msg="CreateContainer within sandbox \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406\""
Nov 12 17:42:27.355457 containerd[2017]: time="2024-11-12T17:42:27.353654298Z" level=info msg="StartContainer for \"62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406\""
Nov 12 17:42:27.440223 systemd[1]: Started cri-containerd-62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406.scope - libcontainer container 62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406.
Nov 12 17:42:27.517048 containerd[2017]: time="2024-11-12T17:42:27.516788239Z" level=info msg="StartContainer for \"62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406\" returns successfully"
Nov 12 17:42:27.625861 kubelet[3479]: E1112 17:42:27.625156 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977"
Nov 12 17:42:27.929561 kubelet[3479]: I1112 17:42:27.928447 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7794f47696-bq2tk" podStartSLOduration=7.724463741 podStartE2EDuration="10.928420269s" podCreationTimestamp="2024-11-12 17:42:17 +0000 UTC" firstStartedPulling="2024-11-12 17:42:19.894871417 +0000 UTC m=+26.535942997" lastFinishedPulling="2024-11-12 17:42:23.098827945 +0000 UTC m=+29.739899525" observedRunningTime="2024-11-12 17:42:23.887405441 +0000 UTC m=+30.528477093" watchObservedRunningTime="2024-11-12 17:42:27.928420269 +0000 UTC m=+34.569491861"
Nov 12 17:42:28.508316 containerd[2017]: time="2024-11-12T17:42:28.508244588Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 17:42:28.513778 systemd[1]: cri-containerd-62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406.scope: Deactivated successfully.
Nov 12 17:42:28.515255 systemd[1]: cri-containerd-62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406.scope: Consumed 1.027s CPU time.
Nov 12 17:42:28.554720 kubelet[3479]: I1112 17:42:28.554171 3479 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Nov 12 17:42:28.595644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406-rootfs.mount: Deactivated successfully.
Nov 12 17:42:28.650816 kubelet[3479]: I1112 17:42:28.650709 3479 topology_manager.go:215] "Topology Admit Handler" podUID="2856c5dd-06c7-4491-84ce-4e60da83a6ac" podNamespace="kube-system" podName="coredns-7db6d8ff4d-4twwd"
Nov 12 17:42:28.694296 systemd[1]: Created slice kubepods-burstable-pod2856c5dd_06c7_4491_84ce_4e60da83a6ac.slice - libcontainer container kubepods-burstable-pod2856c5dd_06c7_4491_84ce_4e60da83a6ac.slice.
Nov 12 17:42:28.702717 kubelet[3479]: I1112 17:42:28.702633 3479 topology_manager.go:215] "Topology Admit Handler" podUID="9324c64d-81ba-42f9-be40-b1e24856c873" podNamespace="calico-system" podName="calico-kube-controllers-6b45568475-v2mpw"
Nov 12 17:42:28.714955 kubelet[3479]: I1112 17:42:28.714029 3479 topology_manager.go:215] "Topology Admit Handler" podUID="3c76a9e4-323d-472f-a6f5-2dfa31d17b05" podNamespace="calico-apiserver" podName="calico-apiserver-94db85cfb-bhh4t"
Nov 12 17:42:28.736693 kubelet[3479]: I1112 17:42:28.735428 3479 topology_manager.go:215] "Topology Admit Handler" podUID="e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lczwt"
Nov 12 17:42:28.736693 kubelet[3479]: I1112 17:42:28.736549 3479 topology_manager.go:215] "Topology Admit Handler" podUID="1f519653-0fdb-4b0c-9b62-0fc9a35c6511" podNamespace="calico-apiserver" podName="calico-apiserver-94db85cfb-5zcvt"
Nov 12 17:42:28.744623 systemd[1]: Created slice kubepods-besteffort-pod9324c64d_81ba_42f9_be40_b1e24856c873.slice - libcontainer container kubepods-besteffort-pod9324c64d_81ba_42f9_be40_b1e24856c873.slice.
Nov 12 17:42:28.784888 systemd[1]: Created slice kubepods-besteffort-pod3c76a9e4_323d_472f_a6f5_2dfa31d17b05.slice - libcontainer container kubepods-besteffort-pod3c76a9e4_323d_472f_a6f5_2dfa31d17b05.slice.
Nov 12 17:42:28.790315 kubelet[3479]: I1112 17:42:28.787677 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2856c5dd-06c7-4491-84ce-4e60da83a6ac-config-volume\") pod \"coredns-7db6d8ff4d-4twwd\" (UID: \"2856c5dd-06c7-4491-84ce-4e60da83a6ac\") " pod="kube-system/coredns-7db6d8ff4d-4twwd"
Nov 12 17:42:28.790315 kubelet[3479]: I1112 17:42:28.787744 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tb4\" (UniqueName: \"kubernetes.io/projected/3c76a9e4-323d-472f-a6f5-2dfa31d17b05-kube-api-access-58tb4\") pod \"calico-apiserver-94db85cfb-bhh4t\" (UID: \"3c76a9e4-323d-472f-a6f5-2dfa31d17b05\") " pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t"
Nov 12 17:42:28.790315 kubelet[3479]: I1112 17:42:28.787793 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj592\" (UniqueName: \"kubernetes.io/projected/2856c5dd-06c7-4491-84ce-4e60da83a6ac-kube-api-access-qj592\") pod \"coredns-7db6d8ff4d-4twwd\" (UID: \"2856c5dd-06c7-4491-84ce-4e60da83a6ac\") " pod="kube-system/coredns-7db6d8ff4d-4twwd"
Nov 12 17:42:28.790315 kubelet[3479]: I1112 17:42:28.787842 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9324c64d-81ba-42f9-be40-b1e24856c873-tigera-ca-bundle\") pod \"calico-kube-controllers-6b45568475-v2mpw\" (UID: \"9324c64d-81ba-42f9-be40-b1e24856c873\") " pod="calico-system/calico-kube-controllers-6b45568475-v2mpw"
Nov 12 17:42:28.793342 kubelet[3479]: I1112 17:42:28.787884 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c76a9e4-323d-472f-a6f5-2dfa31d17b05-calico-apiserver-certs\") pod \"calico-apiserver-94db85cfb-bhh4t\" (UID: \"3c76a9e4-323d-472f-a6f5-2dfa31d17b05\") " pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t"
Nov 12 17:42:28.793342 kubelet[3479]: I1112 17:42:28.793124 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9mmh\" (UniqueName: \"kubernetes.io/projected/9324c64d-81ba-42f9-be40-b1e24856c873-kube-api-access-j9mmh\") pod \"calico-kube-controllers-6b45568475-v2mpw\" (UID: \"9324c64d-81ba-42f9-be40-b1e24856c873\") " pod="calico-system/calico-kube-controllers-6b45568475-v2mpw"
Nov 12 17:42:28.817485 systemd[1]: Created slice kubepods-burstable-pode6b92c0e_f7a3_46f3_a24e_5d039c6a77d5.slice - libcontainer container kubepods-burstable-pode6b92c0e_f7a3_46f3_a24e_5d039c6a77d5.slice.
Nov 12 17:42:28.851085 systemd[1]: Created slice kubepods-besteffort-pod1f519653_0fdb_4b0c_9b62_0fc9a35c6511.slice - libcontainer container kubepods-besteffort-pod1f519653_0fdb_4b0c_9b62_0fc9a35c6511.slice.
Nov 12 17:42:28.897305 kubelet[3479]: I1112 17:42:28.896547 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xln25\" (UniqueName: \"kubernetes.io/projected/1f519653-0fdb-4b0c-9b62-0fc9a35c6511-kube-api-access-xln25\") pod \"calico-apiserver-94db85cfb-5zcvt\" (UID: \"1f519653-0fdb-4b0c-9b62-0fc9a35c6511\") " pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt"
Nov 12 17:42:28.897305 kubelet[3479]: I1112 17:42:28.896739 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5-config-volume\") pod \"coredns-7db6d8ff4d-lczwt\" (UID: \"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5\") " pod="kube-system/coredns-7db6d8ff4d-lczwt"
Nov 12 17:42:28.897305 kubelet[3479]: I1112 17:42:28.896839 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f519653-0fdb-4b0c-9b62-0fc9a35c6511-calico-apiserver-certs\") pod \"calico-apiserver-94db85cfb-5zcvt\" (UID: \"1f519653-0fdb-4b0c-9b62-0fc9a35c6511\") " pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt"
Nov 12 17:42:28.897305 kubelet[3479]: I1112 17:42:28.896886 3479 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zslpv\" (UniqueName: \"kubernetes.io/projected/e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5-kube-api-access-zslpv\") pod \"coredns-7db6d8ff4d-lczwt\" (UID: \"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5\") " pod="kube-system/coredns-7db6d8ff4d-lczwt"
Nov 12 17:42:29.034016 containerd[2017]: time="2024-11-12T17:42:29.033798582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4twwd,Uid:2856c5dd-06c7-4491-84ce-4e60da83a6ac,Namespace:kube-system,Attempt:0,}"
Nov 12 17:42:29.066367 containerd[2017]: time="2024-11-12T17:42:29.066291678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b45568475-v2mpw,Uid:9324c64d-81ba-42f9-be40-b1e24856c873,Namespace:calico-system,Attempt:0,}"
Nov 12 17:42:29.099562 containerd[2017]: time="2024-11-12T17:42:29.099500287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-bhh4t,Uid:3c76a9e4-323d-472f-a6f5-2dfa31d17b05,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 17:42:29.146555 containerd[2017]: time="2024-11-12T17:42:29.146193919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lczwt,Uid:e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5,Namespace:kube-system,Attempt:0,}"
Nov 12 17:42:29.170119 containerd[2017]: time="2024-11-12T17:42:29.169823551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-5zcvt,Uid:1f519653-0fdb-4b0c-9b62-0fc9a35c6511,Namespace:calico-apiserver,Attempt:0,}"
Nov 12 17:42:29.635681 systemd[1]: Created slice kubepods-besteffort-pod272a2aec_8f98_4451_9782_58222f5f8977.slice - libcontainer container kubepods-besteffort-pod272a2aec_8f98_4451_9782_58222f5f8977.slice.
Nov 12 17:42:29.644520 containerd[2017]: time="2024-11-12T17:42:29.644429373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnp2k,Uid:272a2aec-8f98-4451-9782-58222f5f8977,Namespace:calico-system,Attempt:0,}"
Nov 12 17:42:29.786760 containerd[2017]: time="2024-11-12T17:42:29.786381838Z" level=info msg="shim disconnected" id=62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406 namespace=k8s.io
Nov 12 17:42:29.786760 containerd[2017]: time="2024-11-12T17:42:29.786449386Z" level=warning msg="cleaning up after shim disconnected" id=62bd93717b6d6640752220fc5ffa1ea11adcb78031975b2ebb8358a623a70406 namespace=k8s.io
Nov 12 17:42:29.786760 containerd[2017]: time="2024-11-12T17:42:29.786469354Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:42:29.843411 containerd[2017]: time="2024-11-12T17:42:29.843331102Z" level=warning msg="cleanup warnings time=\"2024-11-12T17:42:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 17:42:29.937405 containerd[2017]: time="2024-11-12T17:42:29.936398267Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\""
Nov 12 17:42:30.337024 containerd[2017]: time="2024-11-12T17:42:30.336865209Z" level=error msg="Failed to destroy network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.339313 containerd[2017]: time="2024-11-12T17:42:30.339112773Z" level=error msg="encountered an error cleaning up failed sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.340495 containerd[2017]: time="2024-11-12T17:42:30.340115985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-bhh4t,Uid:3c76a9e4-323d-472f-a6f5-2dfa31d17b05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.341592 kubelet[3479]: E1112 17:42:30.341513 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.343751 kubelet[3479]: E1112 17:42:30.343059 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t"
Nov 12 17:42:30.343751 kubelet[3479]: E1112 17:42:30.343122 3479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t"
Nov 12 17:42:30.343751 kubelet[3479]: E1112 17:42:30.343218 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94db85cfb-bhh4t_calico-apiserver(3c76a9e4-323d-472f-a6f5-2dfa31d17b05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94db85cfb-bhh4t_calico-apiserver(3c76a9e4-323d-472f-a6f5-2dfa31d17b05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t" podUID="3c76a9e4-323d-472f-a6f5-2dfa31d17b05"
Nov 12 17:42:30.382645 containerd[2017]: time="2024-11-12T17:42:30.382324797Z" level=error msg="Failed to destroy network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.385127 containerd[2017]: time="2024-11-12T17:42:30.384708465Z" level=error msg="encountered an error cleaning up failed sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.385127 containerd[2017]: time="2024-11-12T17:42:30.384927945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lczwt,Uid:e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.385782 kubelet[3479]: E1112 17:42:30.385486 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.385782 kubelet[3479]: E1112 17:42:30.385608 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lczwt"
Nov 12 17:42:30.385782 kubelet[3479]: E1112 17:42:30.385652 3479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-lczwt"
Nov 12 17:42:30.390824 kubelet[3479]: E1112 17:42:30.385748 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-lczwt_kube-system(e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-lczwt_kube-system(e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lczwt" podUID="e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5"
Nov 12 17:42:30.394645 containerd[2017]: time="2024-11-12T17:42:30.394020117Z" level=error msg="Failed to destroy network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.398265 containerd[2017]: time="2024-11-12T17:42:30.398070081Z" level=error msg="encountered an error cleaning up failed sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.398672 containerd[2017]: time="2024-11-12T17:42:30.398504145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4twwd,Uid:2856c5dd-06c7-4491-84ce-4e60da83a6ac,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.400101 kubelet[3479]: E1112 17:42:30.400039 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.400377 kubelet[3479]: E1112 17:42:30.400340 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4twwd"
Nov 12 17:42:30.400553 kubelet[3479]: E1112 17:42:30.400504 3479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-4twwd"
Nov 12 17:42:30.400978 kubelet[3479]: E1112 17:42:30.400861 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-4twwd_kube-system(2856c5dd-06c7-4491-84ce-4e60da83a6ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-4twwd_kube-system(2856c5dd-06c7-4491-84ce-4e60da83a6ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4twwd" podUID="2856c5dd-06c7-4491-84ce-4e60da83a6ac"
Nov 12 17:42:30.409810 containerd[2017]: time="2024-11-12T17:42:30.409704297Z" level=error msg="Failed to destroy network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.419789 containerd[2017]: time="2024-11-12T17:42:30.419653401Z" level=error msg="encountered an error cleaning up failed sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.419789 containerd[2017]: time="2024-11-12T17:42:30.419845977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b45568475-v2mpw,Uid:9324c64d-81ba-42f9-be40-b1e24856c873,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.421122 kubelet[3479]: E1112 17:42:30.420965 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.421235 kubelet[3479]: E1112 17:42:30.421168 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b45568475-v2mpw"
Nov 12 17:42:30.422119 kubelet[3479]: E1112 17:42:30.421986 3479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b45568475-v2mpw"
Nov 12 17:42:30.422119 kubelet[3479]: E1112 17:42:30.422244 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b45568475-v2mpw_calico-system(9324c64d-81ba-42f9-be40-b1e24856c873)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b45568475-v2mpw_calico-system(9324c64d-81ba-42f9-be40-b1e24856c873)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b45568475-v2mpw" podUID="9324c64d-81ba-42f9-be40-b1e24856c873"
Nov 12 17:42:30.447727 containerd[2017]: time="2024-11-12T17:42:30.447658137Z" level=error msg="Failed to destroy network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.449126 containerd[2017]: time="2024-11-12T17:42:30.448532013Z" level=error msg="encountered an error cleaning up failed sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.449126 containerd[2017]: time="2024-11-12T17:42:30.448700217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-5zcvt,Uid:1f519653-0fdb-4b0c-9b62-0fc9a35c6511,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.449537 kubelet[3479]: E1112 17:42:30.449475 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.449637 kubelet[3479]: E1112 17:42:30.449562 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt"
Nov 12 17:42:30.449637 kubelet[3479]: E1112 17:42:30.449606 3479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt"
Nov 12 17:42:30.449750 kubelet[3479]: E1112 17:42:30.449671 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-94db85cfb-5zcvt_calico-apiserver(1f519653-0fdb-4b0c-9b62-0fc9a35c6511)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-94db85cfb-5zcvt_calico-apiserver(1f519653-0fdb-4b0c-9b62-0fc9a35c6511)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt" podUID="1f519653-0fdb-4b0c-9b62-0fc9a35c6511"
Nov 12 17:42:30.463611 containerd[2017]: time="2024-11-12T17:42:30.463489545Z" level=error msg="Failed to destroy network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.464610 containerd[2017]: time="2024-11-12T17:42:30.464403381Z" level=error msg="encountered an error cleaning up failed sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.464610 containerd[2017]: time="2024-11-12T17:42:30.464521833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnp2k,Uid:272a2aec-8f98-4451-9782-58222f5f8977,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.465449 kubelet[3479]: E1112 17:42:30.465116 3479 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Nov 12 17:42:30.465449 kubelet[3479]: E1112 17:42:30.465239 3479 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:30.465449 kubelet[3479]: E1112 17:42:30.465290 3479 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qnp2k"
Nov 12 17:42:30.467958 kubelet[3479]: E1112 17:42:30.465427 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qnp2k_calico-system(272a2aec-8f98-4451-9782-58222f5f8977)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qnp2k_calico-system(272a2aec-8f98-4451-9782-58222f5f8977)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977"
Nov 12 17:42:30.600086 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8-shm.mount: Deactivated successfully.
Nov 12 17:42:30.601168 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5-shm.mount: Deactivated successfully.
Nov 12 17:42:30.601376 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd-shm.mount: Deactivated successfully. Nov 12 17:42:30.601509 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810-shm.mount: Deactivated successfully. Nov 12 17:42:30.933258 kubelet[3479]: I1112 17:42:30.932177 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:30.935501 containerd[2017]: time="2024-11-12T17:42:30.933868056Z" level=info msg="StopPodSandbox for \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\"" Nov 12 17:42:30.939200 containerd[2017]: time="2024-11-12T17:42:30.936556416Z" level=info msg="Ensure that sandbox f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810 in task-service has been cleanup successfully" Nov 12 17:42:30.939395 kubelet[3479]: I1112 17:42:30.938167 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:30.940358 containerd[2017]: time="2024-11-12T17:42:30.940250028Z" level=info msg="StopPodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\"" Nov 12 17:42:30.942830 containerd[2017]: time="2024-11-12T17:42:30.942768972Z" level=info msg="Ensure that sandbox e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5 in task-service has been cleanup successfully" Nov 12 17:42:30.946709 kubelet[3479]: I1112 17:42:30.946439 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:30.950500 containerd[2017]: time="2024-11-12T17:42:30.950265720Z" level=info msg="StopPodSandbox for 
\"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\"" Nov 12 17:42:30.954094 containerd[2017]: time="2024-11-12T17:42:30.953756616Z" level=info msg="Ensure that sandbox 21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507 in task-service has been cleanup successfully" Nov 12 17:42:30.958854 kubelet[3479]: I1112 17:42:30.957979 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:30.968868 containerd[2017]: time="2024-11-12T17:42:30.966609144Z" level=info msg="StopPodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\"" Nov 12 17:42:30.970114 containerd[2017]: time="2024-11-12T17:42:30.969480768Z" level=info msg="Ensure that sandbox cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb in task-service has been cleanup successfully" Nov 12 17:42:30.982940 kubelet[3479]: I1112 17:42:30.981041 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:30.986486 containerd[2017]: time="2024-11-12T17:42:30.986364816Z" level=info msg="StopPodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\"" Nov 12 17:42:30.987599 containerd[2017]: time="2024-11-12T17:42:30.987517356Z" level=info msg="Ensure that sandbox 9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8 in task-service has been cleanup successfully" Nov 12 17:42:31.015170 kubelet[3479]: I1112 17:42:31.015062 3479 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:31.019033 containerd[2017]: time="2024-11-12T17:42:31.018858800Z" level=info msg="StopPodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\"" Nov 12 17:42:31.026049 containerd[2017]: 
time="2024-11-12T17:42:31.024568508Z" level=info msg="Ensure that sandbox 67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd in task-service has been cleanup successfully" Nov 12 17:42:31.171489 containerd[2017]: time="2024-11-12T17:42:31.171384441Z" level=error msg="StopPodSandbox for \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" failed" error="failed to destroy network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:31.172304 kubelet[3479]: E1112 17:42:31.171807 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:31.172304 kubelet[3479]: E1112 17:42:31.171946 3479 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507"} Nov 12 17:42:31.172304 kubelet[3479]: E1112 17:42:31.172048 3479 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"272a2aec-8f98-4451-9782-58222f5f8977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 12 17:42:31.172304 kubelet[3479]: E1112 17:42:31.172101 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"272a2aec-8f98-4451-9782-58222f5f8977\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qnp2k" podUID="272a2aec-8f98-4451-9782-58222f5f8977" Nov 12 17:42:31.216346 containerd[2017]: time="2024-11-12T17:42:31.215739213Z" level=error msg="StopPodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" failed" error="failed to destroy network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:31.219636 kubelet[3479]: E1112 17:42:31.219323 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:31.219636 kubelet[3479]: E1112 17:42:31.219435 3479 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8"} Nov 12 17:42:31.219636 kubelet[3479]: E1112 17:42:31.219513 3479 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:31.219636 kubelet[3479]: E1112 17:42:31.219556 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-lczwt" podUID="e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5" Nov 12 17:42:31.243659 containerd[2017]: time="2024-11-12T17:42:31.243514389Z" level=error msg="StopPodSandbox for \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" failed" error="failed to destroy network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:31.244921 kubelet[3479]: E1112 17:42:31.244293 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:31.244921 kubelet[3479]: E1112 17:42:31.244388 3479 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810"} Nov 12 17:42:31.244921 kubelet[3479]: E1112 17:42:31.244451 3479 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9324c64d-81ba-42f9-be40-b1e24856c873\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:31.244921 kubelet[3479]: E1112 17:42:31.244497 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9324c64d-81ba-42f9-be40-b1e24856c873\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b45568475-v2mpw" podUID="9324c64d-81ba-42f9-be40-b1e24856c873" Nov 12 17:42:31.246642 containerd[2017]: time="2024-11-12T17:42:31.246111837Z" level=error msg="StopPodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" failed" error="failed to destroy network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:31.248138 kubelet[3479]: E1112 17:42:31.247127 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:31.248138 kubelet[3479]: E1112 17:42:31.247205 3479 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5"} Nov 12 17:42:31.248138 kubelet[3479]: E1112 17:42:31.247267 3479 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c76a9e4-323d-472f-a6f5-2dfa31d17b05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:31.248138 kubelet[3479]: E1112 17:42:31.247316 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c76a9e4-323d-472f-a6f5-2dfa31d17b05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t" podUID="3c76a9e4-323d-472f-a6f5-2dfa31d17b05" Nov 12 17:42:31.251249 containerd[2017]: time="2024-11-12T17:42:31.251162493Z" level=error msg="StopPodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" failed" error="failed to destroy network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:31.252028 kubelet[3479]: E1112 17:42:31.251574 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:31.252028 kubelet[3479]: E1112 17:42:31.251645 3479 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb"} Nov 12 17:42:31.252028 kubelet[3479]: E1112 17:42:31.251704 3479 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1f519653-0fdb-4b0c-9b62-0fc9a35c6511\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:31.252028 kubelet[3479]: E1112 17:42:31.251746 
3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1f519653-0fdb-4b0c-9b62-0fc9a35c6511\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt" podUID="1f519653-0fdb-4b0c-9b62-0fc9a35c6511" Nov 12 17:42:31.256951 containerd[2017]: time="2024-11-12T17:42:31.256766397Z" level=error msg="StopPodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" failed" error="failed to destroy network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 17:42:31.258182 kubelet[3479]: E1112 17:42:31.257240 3479 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:31.258182 kubelet[3479]: E1112 17:42:31.257383 3479 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd"} Nov 12 17:42:31.258182 kubelet[3479]: E1112 17:42:31.257458 3479 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"2856c5dd-06c7-4491-84ce-4e60da83a6ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 17:42:31.258182 kubelet[3479]: E1112 17:42:31.257505 3479 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2856c5dd-06c7-4491-84ce-4e60da83a6ac\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-4twwd" podUID="2856c5dd-06c7-4491-84ce-4e60da83a6ac" Nov 12 17:42:37.010631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2724083129.mount: Deactivated successfully. 
Nov 12 17:42:37.075577 containerd[2017]: time="2024-11-12T17:42:37.075168878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:37.077332 containerd[2017]: time="2024-11-12T17:42:37.077185934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=135495328" Nov 12 17:42:37.078853 containerd[2017]: time="2024-11-12T17:42:37.078748670Z" level=info msg="ImageCreate event name:\"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:37.086401 containerd[2017]: time="2024-11-12T17:42:37.086137850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:37.088816 containerd[2017]: time="2024-11-12T17:42:37.088300010Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"135495190\" in 7.151765231s" Nov 12 17:42:37.088816 containerd[2017]: time="2024-11-12T17:42:37.088422986Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:8d083b1bdef5f976f011d47e03dcb8015c1a80cb54a915c6b8e64df03f0743d5\"" Nov 12 17:42:37.124888 containerd[2017]: time="2024-11-12T17:42:37.124807179Z" level=info msg="CreateContainer within sandbox \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 17:42:37.159278 containerd[2017]: time="2024-11-12T17:42:37.158386971Z" level=info 
msg="CreateContainer within sandbox \"34c19aaae9c8bd424069723f726bb84ae6bec5090364dfe927b121c76388fe84\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"04bf7f46931e7a57623137dfcb12103cfd52fb655b0f0414defdf9e26a008660\"" Nov 12 17:42:37.161073 containerd[2017]: time="2024-11-12T17:42:37.161018751Z" level=info msg="StartContainer for \"04bf7f46931e7a57623137dfcb12103cfd52fb655b0f0414defdf9e26a008660\"" Nov 12 17:42:37.218454 systemd[1]: Started cri-containerd-04bf7f46931e7a57623137dfcb12103cfd52fb655b0f0414defdf9e26a008660.scope - libcontainer container 04bf7f46931e7a57623137dfcb12103cfd52fb655b0f0414defdf9e26a008660. Nov 12 17:42:37.306528 containerd[2017]: time="2024-11-12T17:42:37.306336783Z" level=info msg="StartContainer for \"04bf7f46931e7a57623137dfcb12103cfd52fb655b0f0414defdf9e26a008660\" returns successfully" Nov 12 17:42:37.455659 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 17:42:37.455973 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Nov 12 17:42:38.124956 kubelet[3479]: I1112 17:42:38.122269 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7hn67" podStartSLOduration=3.796393729 podStartE2EDuration="21.110429583s" podCreationTimestamp="2024-11-12 17:42:17 +0000 UTC" firstStartedPulling="2024-11-12 17:42:19.777162264 +0000 UTC m=+26.418233844" lastFinishedPulling="2024-11-12 17:42:37.091198106 +0000 UTC m=+43.732269698" observedRunningTime="2024-11-12 17:42:38.108137139 +0000 UTC m=+44.749208755" watchObservedRunningTime="2024-11-12 17:42:38.110429583 +0000 UTC m=+44.751501175" Nov 12 17:42:39.452942 kubelet[3479]: I1112 17:42:39.451834 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:42:40.191958 kernel: bpftool[4818]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 17:42:40.550214 systemd-networkd[1919]: vxlan.calico: Link UP Nov 12 17:42:40.550232 systemd-networkd[1919]: vxlan.calico: Gained carrier Nov 12 17:42:40.562421 (udev-worker)[4842]: Network interface NamePolicy= disabled on kernel command line. Nov 12 17:42:40.616271 (udev-worker)[4841]: Network interface NamePolicy= disabled on kernel command line. 
Nov 12 17:42:42.482592 systemd-networkd[1919]: vxlan.calico: Gained IPv6LL Nov 12 17:42:42.623502 containerd[2017]: time="2024-11-12T17:42:42.623303170Z" level=info msg="StopPodSandbox for \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\"" Nov 12 17:42:42.626371 containerd[2017]: time="2024-11-12T17:42:42.626278786Z" level=info msg="StopPodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\"" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.833 [INFO][4915] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.835 [INFO][4915] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" iface="eth0" netns="/var/run/netns/cni-9f5a080b-e9ec-64d9-0ae8-4b24c5d42f1d" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.836 [INFO][4915] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" iface="eth0" netns="/var/run/netns/cni-9f5a080b-e9ec-64d9-0ae8-4b24c5d42f1d" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.840 [INFO][4915] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" iface="eth0" netns="/var/run/netns/cni-9f5a080b-e9ec-64d9-0ae8-4b24c5d42f1d" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.840 [INFO][4915] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.840 [INFO][4915] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.943 [INFO][4931] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.943 [INFO][4931] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.943 [INFO][4931] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.959 [WARNING][4931] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.960 [INFO][4931] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.964 [INFO][4931] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:42.978301 containerd[2017]: 2024-11-12 17:42:42.974 [INFO][4915] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:42.981988 containerd[2017]: time="2024-11-12T17:42:42.980090880Z" level=info msg="TearDown network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" successfully" Nov 12 17:42:42.982150 containerd[2017]: time="2024-11-12T17:42:42.980412492Z" level=info msg="StopPodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" returns successfully" Nov 12 17:42:42.989403 systemd[1]: run-netns-cni\x2d9f5a080b\x2de9ec\x2d64d9\x2d0ae8\x2d4b24c5d42f1d.mount: Deactivated successfully. 
Nov 12 17:42:42.994532 containerd[2017]: time="2024-11-12T17:42:42.994480440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-5zcvt,Uid:1f519653-0fdb-4b0c-9b62-0fc9a35c6511,Namespace:calico-apiserver,Attempt:1,}"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.842 [INFO][4919] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.842 [INFO][4919] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" iface="eth0" netns="/var/run/netns/cni-f0063983-d5b9-babf-3e8f-dc92e9f9efd4"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.843 [INFO][4919] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" iface="eth0" netns="/var/run/netns/cni-f0063983-d5b9-babf-3e8f-dc92e9f9efd4"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.844 [INFO][4919] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" iface="eth0" netns="/var/run/netns/cni-f0063983-d5b9-babf-3e8f-dc92e9f9efd4"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.845 [INFO][4919] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.846 [INFO][4919] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.952 [INFO][4932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.952 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.964 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.999 [WARNING][4932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:42.999 [INFO][4932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:43.004 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:42:43.011447 containerd[2017]: 2024-11-12 17:42:43.008 [INFO][4919] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507"
Nov 12 17:42:43.018364 containerd[2017]: time="2024-11-12T17:42:43.012520520Z" level=info msg="TearDown network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" successfully"
Nov 12 17:42:43.018364 containerd[2017]: time="2024-11-12T17:42:43.012574880Z" level=info msg="StopPodSandbox for \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" returns successfully"
Nov 12 17:42:43.018364 containerd[2017]: time="2024-11-12T17:42:43.014648132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnp2k,Uid:272a2aec-8f98-4451-9782-58222f5f8977,Namespace:calico-system,Attempt:1,}"
Nov 12 17:42:43.023455 systemd[1]: run-netns-cni\x2df0063983\x2dd5b9\x2dbabf\x2d3e8f\x2ddc92e9f9efd4.mount: Deactivated successfully.
Nov 12 17:42:43.407667 systemd-networkd[1919]: caliee0c0d8b1d0: Link UP
Nov 12 17:42:43.410186 systemd-networkd[1919]: caliee0c0d8b1d0: Gained carrier
Nov 12 17:42:43.412171 (udev-worker)[4857]: Network interface NamePolicy= disabled on kernel command line.
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.178 [INFO][4944] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0 calico-apiserver-94db85cfb- calico-apiserver 1f519653-0fdb-4b0c-9b62-0fc9a35c6511 801 0 2024-11-12 17:42:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94db85cfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-62 calico-apiserver-94db85cfb-5zcvt eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliee0c0d8b1d0 [] []}} ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.179 [INFO][4944] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.295 [INFO][4967] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" HandleID="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.325 [INFO][4967] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" HandleID="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400029ebc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-62", "pod":"calico-apiserver-94db85cfb-5zcvt", "timestamp":"2024-11-12 17:42:43.295485417 +0000 UTC"}, Hostname:"ip-172-31-24-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.325 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.325 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.325 [INFO][4967] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-62'
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.330 [INFO][4967] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.340 [INFO][4967] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.348 [INFO][4967] ipam/ipam.go 489: Trying affinity for 192.168.122.128/26 host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.353 [INFO][4967] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.358 [INFO][4967] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.358 [INFO][4967] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.362 [INFO][4967] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.375 [INFO][4967] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.389 [INFO][4967] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.129/26] block=192.168.122.128/26 handle="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.389 [INFO][4967] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.129/26] handle="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" host="ip-172-31-24-62"
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.389 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:42:43.491365 containerd[2017]: 2024-11-12 17:42:43.390 [INFO][4967] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.129/26] IPv6=[] ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" HandleID="k8s-pod-network.04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.495997 containerd[2017]: 2024-11-12 17:42:43.395 [INFO][4944] cni-plugin/k8s.go 386: Populated endpoint ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f519653-0fdb-4b0c-9b62-0fc9a35c6511", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"", Pod:"calico-apiserver-94db85cfb-5zcvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee0c0d8b1d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:42:43.495997 containerd[2017]: 2024-11-12 17:42:43.395 [INFO][4944] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.129/32] ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.495997 containerd[2017]: 2024-11-12 17:42:43.395 [INFO][4944] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee0c0d8b1d0 ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.495997 containerd[2017]: 2024-11-12 17:42:43.420 [INFO][4944] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.495997 containerd[2017]: 2024-11-12 17:42:43.422 [INFO][4944] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f519653-0fdb-4b0c-9b62-0fc9a35c6511", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498", Pod:"calico-apiserver-94db85cfb-5zcvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee0c0d8b1d0", MAC:"6e:43:9d:0a:c4:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:42:43.495997 containerd[2017]: 2024-11-12 17:42:43.479 [INFO][4944] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-5zcvt" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0"
Nov 12 17:42:43.610084 containerd[2017]: time="2024-11-12T17:42:43.603989435Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:42:43.610084 containerd[2017]: time="2024-11-12T17:42:43.604120391Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:42:43.610084 containerd[2017]: time="2024-11-12T17:42:43.604166111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:43.610084 containerd[2017]: time="2024-11-12T17:42:43.604798907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:43.618371 systemd-networkd[1919]: calife98935dc8b: Link UP
Nov 12 17:42:43.621858 systemd-networkd[1919]: calife98935dc8b: Gained carrier
Nov 12 17:42:43.632527 containerd[2017]: time="2024-11-12T17:42:43.631830095Z" level=info msg="StopPodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\""
Nov 12 17:42:43.637148 containerd[2017]: time="2024-11-12T17:42:43.636080051Z" level=info msg="StopPodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\""
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.210 [INFO][4953] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0 csi-node-driver- calico-system 272a2aec-8f98-4451-9782-58222f5f8977 802 0 2024-11-12 17:42:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-24-62 csi-node-driver-qnp2k eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calife98935dc8b [] []}} ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.210 [INFO][4953] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.294 [INFO][4971] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" HandleID="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.328 [INFO][4971] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" HandleID="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333870), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-62", "pod":"csi-node-driver-qnp2k", "timestamp":"2024-11-12 17:42:43.294519405 +0000 UTC"}, Hostname:"ip-172-31-24-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.328 [INFO][4971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.390 [INFO][4971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.391 [INFO][4971] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-62'
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.400 [INFO][4971] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.431 [INFO][4971] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.472 [INFO][4971] ipam/ipam.go 489: Trying affinity for 192.168.122.128/26 host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.484 [INFO][4971] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.518 [INFO][4971] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.522 [INFO][4971] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.531 [INFO][4971] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.551 [INFO][4971] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.576 [INFO][4971] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.130/26] block=192.168.122.128/26 handle="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.576 [INFO][4971] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.130/26] handle="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" host="ip-172-31-24-62"
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.576 [INFO][4971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:42:43.726635 containerd[2017]: 2024-11-12 17:42:43.576 [INFO][4971] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.130/26] IPv6=[] ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" HandleID="k8s-pod-network.d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.728060 containerd[2017]: 2024-11-12 17:42:43.591 [INFO][4953] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"272a2aec-8f98-4451-9782-58222f5f8977", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"", Pod:"csi-node-driver-qnp2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife98935dc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:42:43.728060 containerd[2017]: 2024-11-12 17:42:43.591 [INFO][4953] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.130/32] ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.728060 containerd[2017]: 2024-11-12 17:42:43.591 [INFO][4953] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife98935dc8b ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.728060 containerd[2017]: 2024-11-12 17:42:43.627 [INFO][4953] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.728060 containerd[2017]: 2024-11-12 17:42:43.661 [INFO][4953] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"272a2aec-8f98-4451-9782-58222f5f8977", ResourceVersion:"802", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086", Pod:"csi-node-driver-qnp2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife98935dc8b", MAC:"be:10:ac:45:61:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 17:42:43.728060 containerd[2017]: 2024-11-12 17:42:43.706 [INFO][4953] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086" Namespace="calico-system" Pod="csi-node-driver-qnp2k" WorkloadEndpoint="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0"
Nov 12 17:42:43.785199 systemd[1]: Started cri-containerd-04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498.scope - libcontainer container 04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498.
Nov 12 17:42:43.848843 containerd[2017]: time="2024-11-12T17:42:43.848309748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 17:42:43.848843 containerd[2017]: time="2024-11-12T17:42:43.848653956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 17:42:43.849166 containerd[2017]: time="2024-11-12T17:42:43.848749716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:43.850214 containerd[2017]: time="2024-11-12T17:42:43.850088532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 17:42:43.963550 systemd[1]: Started cri-containerd-d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086.scope - libcontainer container d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086.
Nov 12 17:42:44.144390 containerd[2017]: time="2024-11-12T17:42:44.144305301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-5zcvt,Uid:1f519653-0fdb-4b0c-9b62-0fc9a35c6511,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498\""
Nov 12 17:42:44.184724 containerd[2017]: time="2024-11-12T17:42:44.184358134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\""
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:43.977 [INFO][5026] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:43.978 [INFO][5026] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" iface="eth0" netns="/var/run/netns/cni-45e6531d-bee8-a3f5-2c39-7e65f6756564"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:43.980 [INFO][5026] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" iface="eth0" netns="/var/run/netns/cni-45e6531d-bee8-a3f5-2c39-7e65f6756564"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:43.983 [INFO][5026] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" iface="eth0" netns="/var/run/netns/cni-45e6531d-bee8-a3f5-2c39-7e65f6756564"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:43.983 [INFO][5026] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:43.983 [INFO][5026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.126 [INFO][5111] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.126 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.126 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.174 [WARNING][5111] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.174 [INFO][5111] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0"
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.180 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 17:42:44.205553 containerd[2017]: 2024-11-12 17:42:44.198 [INFO][5026] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd"
Nov 12 17:42:44.214429 containerd[2017]: time="2024-11-12T17:42:44.211985926Z" level=info msg="TearDown network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" successfully"
Nov 12 17:42:44.214429 containerd[2017]: time="2024-11-12T17:42:44.214248094Z" level=info msg="StopPodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" returns successfully"
Nov 12 17:42:44.215750 systemd[1]: run-netns-cni\x2d45e6531d\x2dbee8\x2da3f5\x2d2c39\x2d7e65f6756564.mount: Deactivated successfully.
Nov 12 17:42:44.224175 containerd[2017]: time="2024-11-12T17:42:44.222350518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4twwd,Uid:2856c5dd-06c7-4491-84ce-4e60da83a6ac,Namespace:kube-system,Attempt:1,}" Nov 12 17:42:44.262119 containerd[2017]: time="2024-11-12T17:42:44.261714106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qnp2k,Uid:272a2aec-8f98-4451-9782-58222f5f8977,Namespace:calico-system,Attempt:1,} returns sandbox id \"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086\"" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.049 [INFO][5055] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.050 [INFO][5055] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" iface="eth0" netns="/var/run/netns/cni-4e82fc35-771a-6f5f-a8f3-1b4ce30d3b66" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.052 [INFO][5055] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" iface="eth0" netns="/var/run/netns/cni-4e82fc35-771a-6f5f-a8f3-1b4ce30d3b66" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.054 [INFO][5055] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" iface="eth0" netns="/var/run/netns/cni-4e82fc35-771a-6f5f-a8f3-1b4ce30d3b66" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.056 [INFO][5055] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.057 [INFO][5055] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.326 [INFO][5120] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.327 [INFO][5120] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.327 [INFO][5120] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.358 [WARNING][5120] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.358 [INFO][5120] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.367 [INFO][5120] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:44.397049 containerd[2017]: 2024-11-12 17:42:44.387 [INFO][5055] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:44.397049 containerd[2017]: time="2024-11-12T17:42:44.397024643Z" level=info msg="TearDown network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" successfully" Nov 12 17:42:44.403988 containerd[2017]: time="2024-11-12T17:42:44.397071395Z" level=info msg="StopPodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" returns successfully" Nov 12 17:42:44.403248 systemd[1]: run-netns-cni\x2d4e82fc35\x2d771a\x2d6f5f\x2da8f3\x2d1b4ce30d3b66.mount: Deactivated successfully. 
Nov 12 17:42:44.409324 containerd[2017]: time="2024-11-12T17:42:44.408852179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-bhh4t,Uid:3c76a9e4-323d-472f-a6f5-2dfa31d17b05,Namespace:calico-apiserver,Attempt:1,}" Nov 12 17:42:44.626957 containerd[2017]: time="2024-11-12T17:42:44.625747464Z" level=info msg="StopPodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\"" Nov 12 17:42:45.092729 systemd-networkd[1919]: cali66b4c95d73f: Link UP Nov 12 17:42:45.133053 systemd-networkd[1919]: cali66b4c95d73f: Gained carrier Nov 12 17:42:45.197612 systemd[1]: Started sshd@9-172.31.24.62:22-139.178.89.65:42888.service - OpenSSH per-connection server daemon (139.178.89.65:42888). Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.453 [INFO][5140] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0 coredns-7db6d8ff4d- kube-system 2856c5dd-06c7-4491-84ce-4e60da83a6ac 811 0 2024-11-12 17:42:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-62 coredns-7db6d8ff4d-4twwd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali66b4c95d73f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.454 [INFO][5140] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" 
WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.535 [INFO][5161] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" HandleID="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.614 [INFO][5161] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" HandleID="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000318d30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-62", "pod":"coredns-7db6d8ff4d-4twwd", "timestamp":"2024-11-12 17:42:44.535490771 +0000 UTC"}, Hostname:"ip-172-31-24-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.614 [INFO][5161] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.618 [INFO][5161] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.618 [INFO][5161] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-62' Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.641 [INFO][5161] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.669 [INFO][5161] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.724 [INFO][5161] ipam/ipam.go 489: Trying affinity for 192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.786 [INFO][5161] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.870 [INFO][5161] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.870 [INFO][5161] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.891 [INFO][5161] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:44.929 [INFO][5161] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:45.027 [INFO][5161] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.131/26] block=192.168.122.128/26 
handle="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:45.030 [INFO][5161] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.131/26] handle="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" host="ip-172-31-24-62" Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:45.030 [INFO][5161] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:45.221331 containerd[2017]: 2024-11-12 17:42:45.030 [INFO][5161] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.131/26] IPv6=[] ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" HandleID="k8s-pod-network.a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.231617 containerd[2017]: 2024-11-12 17:42:45.043 [INFO][5140] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2856c5dd-06c7-4491-84ce-4e60da83a6ac", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"", Pod:"coredns-7db6d8ff4d-4twwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66b4c95d73f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:45.231617 containerd[2017]: 2024-11-12 17:42:45.044 [INFO][5140] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.131/32] ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.231617 containerd[2017]: 2024-11-12 17:42:45.044 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66b4c95d73f ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.231617 containerd[2017]: 2024-11-12 17:42:45.137 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" 
WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.231617 containerd[2017]: 2024-11-12 17:42:45.151 [INFO][5140] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2856c5dd-06c7-4491-84ce-4e60da83a6ac", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e", Pod:"coredns-7db6d8ff4d-4twwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66b4c95d73f", MAC:"2a:ac:4d:f8:2e:a0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:45.231617 containerd[2017]: 2024-11-12 17:42:45.188 [INFO][5140] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-4twwd" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:45.235736 systemd-networkd[1919]: caliee0c0d8b1d0: Gained IPv6LL Nov 12 17:42:45.346288 systemd-networkd[1919]: cali39cdcaf8b6a: Link UP Nov 12 17:42:45.351741 systemd-networkd[1919]: cali39cdcaf8b6a: Gained carrier Nov 12 17:42:45.367192 containerd[2017]: time="2024-11-12T17:42:45.366856367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:45.367979 containerd[2017]: time="2024-11-12T17:42:45.367719239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:45.368721 containerd[2017]: time="2024-11-12T17:42:45.368235107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:45.391051 containerd[2017]: time="2024-11-12T17:42:45.380501520Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:44.559 [INFO][5151] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0 calico-apiserver-94db85cfb- calico-apiserver 3c76a9e4-323d-472f-a6f5-2dfa31d17b05 813 0 2024-11-12 17:42:16 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:94db85cfb projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-24-62 calico-apiserver-94db85cfb-bhh4t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39cdcaf8b6a [] []}} ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:44.559 [INFO][5151] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:44.765 [INFO][5174] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" HandleID="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:44.936 [INFO][5174] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" HandleID="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b9c70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-24-62", "pod":"calico-apiserver-94db85cfb-bhh4t", "timestamp":"2024-11-12 17:42:44.765585444 +0000 UTC"}, Hostname:"ip-172-31-24-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:44.936 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.032 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.032 [INFO][5174] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-62' Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.127 [INFO][5174] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.202 [INFO][5174] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.245 [INFO][5174] ipam/ipam.go 489: Trying affinity for 192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.264 [INFO][5174] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.275 [INFO][5174] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.275 [INFO][5174] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.286 [INFO][5174] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334 Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.299 [INFO][5174] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.317 [INFO][5174] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.132/26] block=192.168.122.128/26 
handle="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.318 [INFO][5174] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.132/26] handle="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" host="ip-172-31-24-62" Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.318 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:45.485925 containerd[2017]: 2024-11-12 17:42:45.318 [INFO][5174] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.132/26] IPv6=[] ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" HandleID="k8s-pod-network.b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.490496 containerd[2017]: 2024-11-12 17:42:45.330 [INFO][5151] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c76a9e4-323d-472f-a6f5-2dfa31d17b05", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"", Pod:"calico-apiserver-94db85cfb-bhh4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cdcaf8b6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:45.490496 containerd[2017]: 2024-11-12 17:42:45.330 [INFO][5151] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.132/32] ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.490496 containerd[2017]: 2024-11-12 17:42:45.330 [INFO][5151] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali39cdcaf8b6a ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.490496 containerd[2017]: 2024-11-12 17:42:45.350 [INFO][5151] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.490496 containerd[2017]: 2024-11-12 
17:42:45.363 [INFO][5151] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c76a9e4-323d-472f-a6f5-2dfa31d17b05", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334", Pod:"calico-apiserver-94db85cfb-bhh4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cdcaf8b6a", MAC:"16:1c:4b:0d:f3:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:45.490496 containerd[2017]: 2024-11-12 17:42:45.428 [INFO][5151] cni-plugin/k8s.go 500: 
Wrote updated endpoint to datastore ContainerID="b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334" Namespace="calico-apiserver" Pod="calico-apiserver-94db85cfb-bhh4t" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:45.492208 systemd-networkd[1919]: calife98935dc8b: Gained IPv6LL Nov 12 17:42:45.494622 sshd[5206]: Accepted publickey for core from 139.178.89.65 port 42888 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:45.505842 sshd[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:44.987 [INFO][5186] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:44.990 [INFO][5186] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" iface="eth0" netns="/var/run/netns/cni-3e9ba08e-074a-b3b8-3731-1915bbbd6a81" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:44.991 [INFO][5186] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" iface="eth0" netns="/var/run/netns/cni-3e9ba08e-074a-b3b8-3731-1915bbbd6a81" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:44.992 [INFO][5186] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" iface="eth0" netns="/var/run/netns/cni-3e9ba08e-074a-b3b8-3731-1915bbbd6a81" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:44.992 [INFO][5186] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:44.992 [INFO][5186] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.294 [INFO][5197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.294 [INFO][5197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.321 [INFO][5197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.379 [WARNING][5197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.379 [INFO][5197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.404 [INFO][5197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:45.527645 containerd[2017]: 2024-11-12 17:42:45.483 [INFO][5186] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:45.531056 containerd[2017]: time="2024-11-12T17:42:45.528511992Z" level=info msg="TearDown network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" successfully" Nov 12 17:42:45.531056 containerd[2017]: time="2024-11-12T17:42:45.528573252Z" level=info msg="StopPodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" returns successfully" Nov 12 17:42:45.545988 systemd[1]: run-netns-cni\x2d3e9ba08e\x2d074a\x2db3b8\x2d3731\x2d1915bbbd6a81.mount: Deactivated successfully. Nov 12 17:42:45.548655 containerd[2017]: time="2024-11-12T17:42:45.546627936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lczwt,Uid:e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5,Namespace:kube-system,Attempt:1,}" Nov 12 17:42:45.569980 systemd[1]: Started cri-containerd-a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e.scope - libcontainer container a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e. 
Nov 12 17:42:45.578499 systemd-logind[1993]: New session 10 of user core. Nov 12 17:42:45.590441 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 17:42:45.760240 containerd[2017]: time="2024-11-12T17:42:45.757404037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:45.760240 containerd[2017]: time="2024-11-12T17:42:45.757524613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:45.760240 containerd[2017]: time="2024-11-12T17:42:45.757573597Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:45.760240 containerd[2017]: time="2024-11-12T17:42:45.757780681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:45.907539 systemd[1]: Started cri-containerd-b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334.scope - libcontainer container b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334. Nov 12 17:42:45.957389 containerd[2017]: time="2024-11-12T17:42:45.957312566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-4twwd,Uid:2856c5dd-06c7-4491-84ce-4e60da83a6ac,Namespace:kube-system,Attempt:1,} returns sandbox id \"a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e\"" Nov 12 17:42:46.043507 containerd[2017]: time="2024-11-12T17:42:46.043266191Z" level=info msg="CreateContainer within sandbox \"a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:42:46.133513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757202661.mount: Deactivated successfully. 
Nov 12 17:42:46.166714 containerd[2017]: time="2024-11-12T17:42:46.162209279Z" level=info msg="CreateContainer within sandbox \"a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"618e40a7f5ccd764ea046b952e43404efbf889fe92d2f7918b6fc19c7b808541\"" Nov 12 17:42:46.182130 containerd[2017]: time="2024-11-12T17:42:46.175655591Z" level=info msg="StartContainer for \"618e40a7f5ccd764ea046b952e43404efbf889fe92d2f7918b6fc19c7b808541\"" Nov 12 17:42:46.303695 sshd[5206]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:46.322438 systemd-networkd[1919]: cali66b4c95d73f: Gained IPv6LL Nov 12 17:42:46.325093 systemd-logind[1993]: Session 10 logged out. Waiting for processes to exit. Nov 12 17:42:46.327255 systemd[1]: sshd@9-172.31.24.62:22-139.178.89.65:42888.service: Deactivated successfully. Nov 12 17:42:46.363671 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 17:42:46.378295 systemd-logind[1993]: Removed session 10. Nov 12 17:42:46.439341 systemd[1]: Started cri-containerd-618e40a7f5ccd764ea046b952e43404efbf889fe92d2f7918b6fc19c7b808541.scope - libcontainer container 618e40a7f5ccd764ea046b952e43404efbf889fe92d2f7918b6fc19c7b808541. 
Nov 12 17:42:46.480083 containerd[2017]: time="2024-11-12T17:42:46.480017893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-94db85cfb-bhh4t,Uid:3c76a9e4-323d-472f-a6f5-2dfa31d17b05,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334\"" Nov 12 17:42:46.600558 containerd[2017]: time="2024-11-12T17:42:46.600370046Z" level=info msg="StartContainer for \"618e40a7f5ccd764ea046b952e43404efbf889fe92d2f7918b6fc19c7b808541\" returns successfully" Nov 12 17:42:46.644280 systemd-networkd[1919]: caliec6c29922f3: Link UP Nov 12 17:42:46.648428 systemd-networkd[1919]: caliec6c29922f3: Gained carrier Nov 12 17:42:46.658293 containerd[2017]: time="2024-11-12T17:42:46.658235882Z" level=info msg="StopPodSandbox for \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\"" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:45.976 [INFO][5278] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0 coredns-7db6d8ff4d- kube-system e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5 844 0 2024-11-12 17:42:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-24-62 coredns-7db6d8ff4d-lczwt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliec6c29922f3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:45.978 [INFO][5278] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.404 [INFO][5337] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" HandleID="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.481 [INFO][5337] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" HandleID="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a0350), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-24-62", "pod":"coredns-7db6d8ff4d-lczwt", "timestamp":"2024-11-12 17:42:46.404327173 +0000 UTC"}, Hostname:"ip-172-31-24-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.482 [INFO][5337] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.483 [INFO][5337] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.483 [INFO][5337] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-62' Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.490 [INFO][5337] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.502 [INFO][5337] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.522 [INFO][5337] ipam/ipam.go 489: Trying affinity for 192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.528 [INFO][5337] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.540 [INFO][5337] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.540 [INFO][5337] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.549 [INFO][5337] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9 Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.570 [INFO][5337] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.612 [INFO][5337] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.133/26] block=192.168.122.128/26 
handle="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.612 [INFO][5337] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.133/26] handle="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" host="ip-172-31-24-62" Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.612 [INFO][5337] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:46.767173 containerd[2017]: 2024-11-12 17:42:46.612 [INFO][5337] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.133/26] IPv6=[] ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" HandleID="k8s-pod-network.d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.771393 containerd[2017]: 2024-11-12 17:42:46.619 [INFO][5278] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"", Pod:"coredns-7db6d8ff4d-lczwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec6c29922f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:46.771393 containerd[2017]: 2024-11-12 17:42:46.619 [INFO][5278] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.133/32] ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.771393 containerd[2017]: 2024-11-12 17:42:46.619 [INFO][5278] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec6c29922f3 ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.771393 containerd[2017]: 2024-11-12 17:42:46.649 [INFO][5278] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" 
WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.771393 containerd[2017]: 2024-11-12 17:42:46.652 [INFO][5278] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9", Pod:"coredns-7db6d8ff4d-lczwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec6c29922f3", MAC:"8a:1d:03:1c:4d:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:46.771393 containerd[2017]: 2024-11-12 17:42:46.754 [INFO][5278] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9" Namespace="kube-system" Pod="coredns-7db6d8ff4d-lczwt" WorkloadEndpoint="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:46.904119 containerd[2017]: time="2024-11-12T17:42:46.902801919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:46.904119 containerd[2017]: time="2024-11-12T17:42:46.903337767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:46.904119 containerd[2017]: time="2024-11-12T17:42:46.903468039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:46.909883 containerd[2017]: time="2024-11-12T17:42:46.907744311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:47.020525 systemd[1]: Started cri-containerd-d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9.scope - libcontainer container d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9. 
Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.019 [INFO][5416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.020 [INFO][5416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" iface="eth0" netns="/var/run/netns/cni-5198339a-a109-4b97-188c-3143c9ec88e5" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.021 [INFO][5416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" iface="eth0" netns="/var/run/netns/cni-5198339a-a109-4b97-188c-3143c9ec88e5" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.023 [INFO][5416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" iface="eth0" netns="/var/run/netns/cni-5198339a-a109-4b97-188c-3143c9ec88e5" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.024 [INFO][5416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.024 [INFO][5416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.111 [INFO][5460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.114 
[INFO][5460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.114 [INFO][5460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.139 [WARNING][5460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.139 [INFO][5460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.143 [INFO][5460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:47.151429 containerd[2017]: 2024-11-12 17:42:47.146 [INFO][5416] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:47.163227 containerd[2017]: time="2024-11-12T17:42:47.156108636Z" level=info msg="TearDown network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" successfully" Nov 12 17:42:47.163227 containerd[2017]: time="2024-11-12T17:42:47.157066788Z" level=info msg="StopPodSandbox for \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" returns successfully" Nov 12 17:42:47.163227 containerd[2017]: time="2024-11-12T17:42:47.159885252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b45568475-v2mpw,Uid:9324c64d-81ba-42f9-be40-b1e24856c873,Namespace:calico-system,Attempt:1,}" Nov 12 17:42:47.163654 systemd[1]: run-netns-cni\x2d5198339a\x2da109\x2d4b97\x2d188c\x2d3143c9ec88e5.mount: Deactivated successfully. Nov 12 17:42:47.284876 systemd-networkd[1919]: cali39cdcaf8b6a: Gained IPv6LL Nov 12 17:42:47.448498 containerd[2017]: time="2024-11-12T17:42:47.448331534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-lczwt,Uid:e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5,Namespace:kube-system,Attempt:1,} returns sandbox id \"d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9\"" Nov 12 17:42:47.466589 containerd[2017]: time="2024-11-12T17:42:47.466500650Z" level=info msg="CreateContainer within sandbox \"d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 17:42:47.484128 kubelet[3479]: I1112 17:42:47.483963 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-4twwd" podStartSLOduration=39.483877694 podStartE2EDuration="39.483877694s" podCreationTimestamp="2024-11-12 17:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:42:47.400161362 +0000 
UTC m=+54.041233446" watchObservedRunningTime="2024-11-12 17:42:47.483877694 +0000 UTC m=+54.124949310" Nov 12 17:42:47.541653 containerd[2017]: time="2024-11-12T17:42:47.541429634Z" level=info msg="CreateContainer within sandbox \"d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"06247442de99aaee9822781cc81706a90772f420fa9845c144893aca71057789\"" Nov 12 17:42:47.547213 containerd[2017]: time="2024-11-12T17:42:47.547154846Z" level=info msg="StartContainer for \"06247442de99aaee9822781cc81706a90772f420fa9845c144893aca71057789\"" Nov 12 17:42:47.652581 systemd[1]: Started cri-containerd-06247442de99aaee9822781cc81706a90772f420fa9845c144893aca71057789.scope - libcontainer container 06247442de99aaee9822781cc81706a90772f420fa9845c144893aca71057789. Nov 12 17:42:47.840933 containerd[2017]: time="2024-11-12T17:42:47.840637072Z" level=info msg="StartContainer for \"06247442de99aaee9822781cc81706a90772f420fa9845c144893aca71057789\" returns successfully" Nov 12 17:42:48.011326 systemd-networkd[1919]: calia02bfb7bc1c: Link UP Nov 12 17:42:48.014738 systemd-networkd[1919]: calia02bfb7bc1c: Gained carrier Nov 12 17:42:48.037923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3481019730.mount: Deactivated successfully. 
Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.552 [INFO][5475] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0 calico-kube-controllers-6b45568475- calico-system 9324c64d-81ba-42f9-be40-b1e24856c873 873 0 2024-11-12 17:42:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b45568475 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-24-62 calico-kube-controllers-6b45568475-v2mpw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia02bfb7bc1c [] []}} ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.552 [INFO][5475] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.870 [INFO][5517] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" HandleID="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.898 [INFO][5517] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" HandleID="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000371da0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-24-62", "pod":"calico-kube-controllers-6b45568475-v2mpw", "timestamp":"2024-11-12 17:42:47.870509704 +0000 UTC"}, Hostname:"ip-172-31-24-62", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.898 [INFO][5517] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.898 [INFO][5517] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.899 [INFO][5517] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-24-62' Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.907 [INFO][5517] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.923 [INFO][5517] ipam/ipam.go 372: Looking up existing affinities for host host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.940 [INFO][5517] ipam/ipam.go 489: Trying affinity for 192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.946 [INFO][5517] ipam/ipam.go 155: Attempting to load block cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.959 [INFO][5517] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.122.128/26 host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.959 [INFO][5517] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.122.128/26 handle="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.964 [INFO][5517] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98 Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.974 [INFO][5517] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.122.128/26 handle="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.992 [INFO][5517] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.122.134/26] block=192.168.122.128/26 
handle="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.992 [INFO][5517] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.122.134/26] handle="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" host="ip-172-31-24-62" Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.992 [INFO][5517] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:48.079489 containerd[2017]: 2024-11-12 17:42:47.992 [INFO][5517] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.122.134/26] IPv6=[] ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" HandleID="k8s-pod-network.eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:48.083416 containerd[2017]: 2024-11-12 17:42:47.997 [INFO][5475] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0", GenerateName:"calico-kube-controllers-6b45568475-", Namespace:"calico-system", SelfLink:"", UID:"9324c64d-81ba-42f9-be40-b1e24856c873", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b45568475", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"", Pod:"calico-kube-controllers-6b45568475-v2mpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia02bfb7bc1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:48.083416 containerd[2017]: 2024-11-12 17:42:47.997 [INFO][5475] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.122.134/32] ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:48.083416 containerd[2017]: 2024-11-12 17:42:47.997 [INFO][5475] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia02bfb7bc1c ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:48.083416 containerd[2017]: 2024-11-12 17:42:48.015 [INFO][5475] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" 
Nov 12 17:42:48.083416 containerd[2017]: 2024-11-12 17:42:48.020 [INFO][5475] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0", GenerateName:"calico-kube-controllers-6b45568475-", Namespace:"calico-system", SelfLink:"", UID:"9324c64d-81ba-42f9-be40-b1e24856c873", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b45568475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98", Pod:"calico-kube-controllers-6b45568475-v2mpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia02bfb7bc1c", MAC:"6a:30:e6:3c:6f:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
17:42:48.083416 containerd[2017]: 2024-11-12 17:42:48.074 [INFO][5475] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98" Namespace="calico-system" Pod="calico-kube-controllers-6b45568475-v2mpw" WorkloadEndpoint="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:48.154395 containerd[2017]: time="2024-11-12T17:42:48.154070017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 17:42:48.157682 containerd[2017]: time="2024-11-12T17:42:48.157264981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 17:42:48.157682 containerd[2017]: time="2024-11-12T17:42:48.157474189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:48.159703 containerd[2017]: time="2024-11-12T17:42:48.159493993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 17:42:48.243881 systemd[1]: Started cri-containerd-eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98.scope - libcontainer container eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98. 
Nov 12 17:42:48.307393 systemd-networkd[1919]: caliec6c29922f3: Gained IPv6LL Nov 12 17:42:48.382416 kubelet[3479]: I1112 17:42:48.380600 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-lczwt" podStartSLOduration=40.380553314 podStartE2EDuration="40.380553314s" podCreationTimestamp="2024-11-12 17:42:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 17:42:48.380478146 +0000 UTC m=+55.021549738" watchObservedRunningTime="2024-11-12 17:42:48.380553314 +0000 UTC m=+55.021624918" Nov 12 17:42:48.753959 containerd[2017]: time="2024-11-12T17:42:48.753176728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b45568475-v2mpw,Uid:9324c64d-81ba-42f9-be40-b1e24856c873,Namespace:calico-system,Attempt:1,} returns sandbox id \"eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98\"" Nov 12 17:42:50.034762 systemd-networkd[1919]: calia02bfb7bc1c: Gained IPv6LL Nov 12 17:42:50.346378 containerd[2017]: time="2024-11-12T17:42:50.346282108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:50.351969 containerd[2017]: time="2024-11-12T17:42:50.351469288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=39277239" Nov 12 17:42:50.353639 containerd[2017]: time="2024-11-12T17:42:50.353557612Z" level=info msg="ImageCreate event name:\"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:50.366199 containerd[2017]: time="2024-11-12T17:42:50.365980684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:50.371744 containerd[2017]: time="2024-11-12T17:42:50.370633168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 6.186208278s" Nov 12 17:42:50.371744 containerd[2017]: time="2024-11-12T17:42:50.370729444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:42:50.374599 containerd[2017]: time="2024-11-12T17:42:50.374522980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 17:42:50.386885 containerd[2017]: time="2024-11-12T17:42:50.386779792Z" level=info msg="CreateContainer within sandbox \"04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:42:50.457724 containerd[2017]: time="2024-11-12T17:42:50.457523369Z" level=info msg="CreateContainer within sandbox \"04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"acf46d72a2b7f922c7d09a4a99845dc59fd8d6c409d0640593d317a185d51410\"" Nov 12 17:42:50.462002 containerd[2017]: time="2024-11-12T17:42:50.459541097Z" level=info msg="StartContainer for \"acf46d72a2b7f922c7d09a4a99845dc59fd8d6c409d0640593d317a185d51410\"" Nov 12 17:42:50.574555 systemd[1]: Started cri-containerd-acf46d72a2b7f922c7d09a4a99845dc59fd8d6c409d0640593d317a185d51410.scope - libcontainer container acf46d72a2b7f922c7d09a4a99845dc59fd8d6c409d0640593d317a185d51410. 
Nov 12 17:42:50.732543 containerd[2017]: time="2024-11-12T17:42:50.732325314Z" level=info msg="StartContainer for \"acf46d72a2b7f922c7d09a4a99845dc59fd8d6c409d0640593d317a185d51410\" returns successfully" Nov 12 17:42:51.343701 systemd[1]: Started sshd@10-172.31.24.62:22-139.178.89.65:60768.service - OpenSSH per-connection server daemon (139.178.89.65:60768). Nov 12 17:42:51.559383 sshd[5656]: Accepted publickey for core from 139.178.89.65 port 60768 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:51.564891 sshd[5656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:51.579993 systemd-logind[1993]: New session 11 of user core. Nov 12 17:42:51.590373 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 17:42:52.185938 sshd[5656]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:52.196981 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 17:42:52.200560 systemd[1]: sshd@10-172.31.24.62:22-139.178.89.65:60768.service: Deactivated successfully. Nov 12 17:42:52.210353 systemd-logind[1993]: Session 11 logged out. Waiting for processes to exit. Nov 12 17:42:52.213519 systemd-logind[1993]: Removed session 11. 
Nov 12 17:42:52.237009 containerd[2017]: time="2024-11-12T17:42:52.236827290Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:52.240283 containerd[2017]: time="2024-11-12T17:42:52.240175662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7464731" Nov 12 17:42:52.243952 containerd[2017]: time="2024-11-12T17:42:52.242946462Z" level=info msg="ImageCreate event name:\"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:52.249355 containerd[2017]: time="2024-11-12T17:42:52.248302818Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:52.250431 containerd[2017]: time="2024-11-12T17:42:52.249314502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"8834367\" in 1.874710726s" Nov 12 17:42:52.251396 containerd[2017]: time="2024-11-12T17:42:52.251359434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:7c36e10791d457ced41235b20bab3cd8f54891dd8f7ddaa627378845532c8737\"" Nov 12 17:42:52.253613 containerd[2017]: time="2024-11-12T17:42:52.253572066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 17:42:52.260760 containerd[2017]: time="2024-11-12T17:42:52.260234070Z" level=info msg="CreateContainer within sandbox \"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 17:42:52.304300 containerd[2017]: time="2024-11-12T17:42:52.303797754Z" level=info msg="CreateContainer within sandbox \"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"631bd7e3412b1b6832bc90f6b3fff7712581c3d33928733e71ce841e17f7f007\"" Nov 12 17:42:52.309972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount669028648.mount: Deactivated successfully. Nov 12 17:42:52.319314 containerd[2017]: time="2024-11-12T17:42:52.314639202Z" level=info msg="StartContainer for \"631bd7e3412b1b6832bc90f6b3fff7712581c3d33928733e71ce841e17f7f007\"" Nov 12 17:42:52.369688 kubelet[3479]: I1112 17:42:52.369633 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:42:52.451441 systemd[1]: Started cri-containerd-631bd7e3412b1b6832bc90f6b3fff7712581c3d33928733e71ce841e17f7f007.scope - libcontainer container 631bd7e3412b1b6832bc90f6b3fff7712581c3d33928733e71ce841e17f7f007. 
Nov 12 17:42:52.520793 ntpd[1988]: Listen normally on 7 vxlan.calico 192.168.122.128:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 7 vxlan.calico 192.168.122.128:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 8 vxlan.calico [fe80::64d1:51ff:fea6:11d4%4]:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 9 caliee0c0d8b1d0 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 10 calife98935dc8b [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 11 cali66b4c95d73f [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 12 cali39cdcaf8b6a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 13 caliec6c29922f3 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 17:42:52.523182 ntpd[1988]: 12 Nov 17:42:52 ntpd[1988]: Listen normally on 14 calia02bfb7bc1c [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 17:42:52.521560 ntpd[1988]: Listen normally on 8 vxlan.calico [fe80::64d1:51ff:fea6:11d4%4]:123 Nov 12 17:42:52.521650 ntpd[1988]: Listen normally on 9 caliee0c0d8b1d0 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 12 17:42:52.521717 ntpd[1988]: Listen normally on 10 calife98935dc8b [fe80::ecee:eeff:feee:eeee%8]:123 Nov 12 17:42:52.521783 ntpd[1988]: Listen normally on 11 cali66b4c95d73f [fe80::ecee:eeff:feee:eeee%9]:123 Nov 12 17:42:52.521877 ntpd[1988]: Listen normally on 12 cali39cdcaf8b6a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 12 17:42:52.521980 ntpd[1988]: Listen normally on 13 caliec6c29922f3 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 12 17:42:52.522047 ntpd[1988]: Listen normally on 14 calia02bfb7bc1c [fe80::ecee:eeff:feee:eeee%12]:123 Nov 12 17:42:52.537153 containerd[2017]: time="2024-11-12T17:42:52.537055171Z" level=info 
msg="StartContainer for \"631bd7e3412b1b6832bc90f6b3fff7712581c3d33928733e71ce841e17f7f007\" returns successfully" Nov 12 17:42:52.625996 containerd[2017]: time="2024-11-12T17:42:52.625889588Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:52.630152 containerd[2017]: time="2024-11-12T17:42:52.630057308Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 17:42:52.649538 containerd[2017]: time="2024-11-12T17:42:52.649445528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"40646891\" in 395.571866ms" Nov 12 17:42:52.649716 containerd[2017]: time="2024-11-12T17:42:52.649534736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:b16306569228fc9acacae1651e8a53108048968f1d86448e39eac75a80149d63\"" Nov 12 17:42:52.657234 containerd[2017]: time="2024-11-12T17:42:52.657021380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 17:42:52.675147 containerd[2017]: time="2024-11-12T17:42:52.671147216Z" level=info msg="CreateContainer within sandbox \"b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 17:42:52.726082 containerd[2017]: time="2024-11-12T17:42:52.723399440Z" level=info msg="CreateContainer within sandbox \"b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a7e798fbc2530354383260495faa0080abb0e9f47466263db8ece3616f0c1528\"" 
Nov 12 17:42:52.726082 containerd[2017]: time="2024-11-12T17:42:52.724524680Z" level=info msg="StartContainer for \"a7e798fbc2530354383260495faa0080abb0e9f47466263db8ece3616f0c1528\"" Nov 12 17:42:52.856644 systemd[1]: Started cri-containerd-a7e798fbc2530354383260495faa0080abb0e9f47466263db8ece3616f0c1528.scope - libcontainer container a7e798fbc2530354383260495faa0080abb0e9f47466263db8ece3616f0c1528. Nov 12 17:42:52.998619 containerd[2017]: time="2024-11-12T17:42:52.996701961Z" level=info msg="StartContainer for \"a7e798fbc2530354383260495faa0080abb0e9f47466263db8ece3616f0c1528\" returns successfully" Nov 12 17:42:53.409282 kubelet[3479]: I1112 17:42:53.409107 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94db85cfb-bhh4t" podStartSLOduration=31.242778552 podStartE2EDuration="37.409062751s" podCreationTimestamp="2024-11-12 17:42:16 +0000 UTC" firstStartedPulling="2024-11-12 17:42:46.487254769 +0000 UTC m=+53.128326361" lastFinishedPulling="2024-11-12 17:42:52.65353898 +0000 UTC m=+59.294610560" observedRunningTime="2024-11-12 17:42:53.406343419 +0000 UTC m=+60.047415023" watchObservedRunningTime="2024-11-12 17:42:53.409062751 +0000 UTC m=+60.050134343" Nov 12 17:42:53.410663 kubelet[3479]: I1112 17:42:53.410464 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-94db85cfb-5zcvt" podStartSLOduration=31.220346233 podStartE2EDuration="37.409670935s" podCreationTimestamp="2024-11-12 17:42:16 +0000 UTC" firstStartedPulling="2024-11-12 17:42:44.18369907 +0000 UTC m=+50.824770662" lastFinishedPulling="2024-11-12 17:42:50.373023772 +0000 UTC m=+57.014095364" observedRunningTime="2024-11-12 17:42:51.399082949 +0000 UTC m=+58.040154565" watchObservedRunningTime="2024-11-12 17:42:53.409670935 +0000 UTC m=+60.050742539" Nov 12 17:42:53.633061 containerd[2017]: time="2024-11-12T17:42:53.631752093Z" level=info msg="StopPodSandbox for 
\"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\"" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.795 [WARNING][5771] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0", GenerateName:"calico-kube-controllers-6b45568475-", Namespace:"calico-system", SelfLink:"", UID:"9324c64d-81ba-42f9-be40-b1e24856c873", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b45568475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98", Pod:"calico-kube-controllers-6b45568475-v2mpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia02bfb7bc1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.797 [INFO][5771] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.797 [INFO][5771] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" iface="eth0" netns="" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.797 [INFO][5771] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.798 [INFO][5771] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.985 [INFO][5777] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.985 [INFO][5777] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:53.985 [INFO][5777] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:54.000 [WARNING][5777] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:54.000 [INFO][5777] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:54.003 [INFO][5777] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.013317 containerd[2017]: 2024-11-12 17:42:54.007 [INFO][5771] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.015702 containerd[2017]: time="2024-11-12T17:42:54.013717674Z" level=info msg="TearDown network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" successfully" Nov 12 17:42:54.015702 containerd[2017]: time="2024-11-12T17:42:54.014781486Z" level=info msg="StopPodSandbox for \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" returns successfully" Nov 12 17:42:54.019631 containerd[2017]: time="2024-11-12T17:42:54.019395114Z" level=info msg="RemovePodSandbox for \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\"" Nov 12 17:42:54.021224 containerd[2017]: time="2024-11-12T17:42:54.020088078Z" level=info msg="Forcibly stopping sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\"" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.157 [WARNING][5795] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0", GenerateName:"calico-kube-controllers-6b45568475-", Namespace:"calico-system", SelfLink:"", UID:"9324c64d-81ba-42f9-be40-b1e24856c873", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b45568475", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98", Pod:"calico-kube-controllers-6b45568475-v2mpw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia02bfb7bc1c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.157 [INFO][5795] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.159 [INFO][5795] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" iface="eth0" netns="" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.159 [INFO][5795] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.159 [INFO][5795] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.223 [INFO][5801] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.223 [INFO][5801] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.223 [INFO][5801] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.241 [WARNING][5801] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.241 [INFO][5801] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" HandleID="k8s-pod-network.f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Workload="ip--172--31--24--62-k8s-calico--kube--controllers--6b45568475--v2mpw-eth0" Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.244 [INFO][5801] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.255121 containerd[2017]: 2024-11-12 17:42:54.250 [INFO][5795] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810" Nov 12 17:42:54.258191 containerd[2017]: time="2024-11-12T17:42:54.255180560Z" level=info msg="TearDown network for sandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" successfully" Nov 12 17:42:54.263151 containerd[2017]: time="2024-11-12T17:42:54.263070380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:54.264074 containerd[2017]: time="2024-11-12T17:42:54.263864312Z" level=info msg="RemovePodSandbox \"f2c7e819738404e62847c564984fba341ac4ef0431f6e9f8b817b28022894810\" returns successfully" Nov 12 17:42:54.268278 containerd[2017]: time="2024-11-12T17:42:54.267084020Z" level=info msg="StopPodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\"" Nov 12 17:42:54.400155 kubelet[3479]: I1112 17:42:54.399409 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.374 [WARNING][5821] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f519653-0fdb-4b0c-9b62-0fc9a35c6511", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498", Pod:"calico-apiserver-94db85cfb-5zcvt", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee0c0d8b1d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.374 [INFO][5821] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.374 [INFO][5821] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" iface="eth0" netns="" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.374 [INFO][5821] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.374 [INFO][5821] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.544 [INFO][5827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.545 [INFO][5827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.552 [INFO][5827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.577 [WARNING][5827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.577 [INFO][5827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.582 [INFO][5827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.597991 containerd[2017]: 2024-11-12 17:42:54.587 [INFO][5821] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.599764 containerd[2017]: time="2024-11-12T17:42:54.599016873Z" level=info msg="TearDown network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" successfully" Nov 12 17:42:54.599764 containerd[2017]: time="2024-11-12T17:42:54.599059449Z" level=info msg="StopPodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" returns successfully" Nov 12 17:42:54.602369 containerd[2017]: time="2024-11-12T17:42:54.600839337Z" level=info msg="RemovePodSandbox for \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\"" Nov 12 17:42:54.602369 containerd[2017]: time="2024-11-12T17:42:54.601960521Z" level=info msg="Forcibly stopping sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\"" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.741 [WARNING][5850] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f519653-0fdb-4b0c-9b62-0fc9a35c6511", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"04bf3e317ee45dcbea9c5ee640dde44264587d801d0bc3f663d14e37a01bc498", Pod:"calico-apiserver-94db85cfb-5zcvt", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliee0c0d8b1d0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.742 [INFO][5850] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.742 [INFO][5850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" iface="eth0" netns="" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.742 [INFO][5850] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.742 [INFO][5850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.839 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.839 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.839 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.869 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.871 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" HandleID="k8s-pod-network.cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--5zcvt-eth0" Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.881 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:54.893290 containerd[2017]: 2024-11-12 17:42:54.885 [INFO][5850] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb" Nov 12 17:42:54.906591 containerd[2017]: time="2024-11-12T17:42:54.898285823Z" level=info msg="TearDown network for sandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" successfully" Nov 12 17:42:54.919831 containerd[2017]: time="2024-11-12T17:42:54.919059395Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:54.919831 containerd[2017]: time="2024-11-12T17:42:54.919220783Z" level=info msg="RemovePodSandbox \"cf5e9ebdf416fcd5d3b335b3589f2492c6eef4d172c9816e6a6131a6791a41eb\" returns successfully" Nov 12 17:42:54.934141 containerd[2017]: time="2024-11-12T17:42:54.932446427Z" level=info msg="StopPodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\"" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.116 [WARNING][5874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2856c5dd-06c7-4491-84ce-4e60da83a6ac", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e", Pod:"coredns-7db6d8ff4d-4twwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66b4c95d73f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.117 [INFO][5874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.117 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" iface="eth0" netns="" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.117 [INFO][5874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.117 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.225 [INFO][5881] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.227 [INFO][5881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.227 [INFO][5881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.263 [WARNING][5881] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.266 [INFO][5881] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.280 [INFO][5881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:55.298122 containerd[2017]: 2024-11-12 17:42:55.288 [INFO][5874] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.302975 containerd[2017]: time="2024-11-12T17:42:55.302884413Z" level=info msg="TearDown network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" successfully" Nov 12 17:42:55.303852 containerd[2017]: time="2024-11-12T17:42:55.303660873Z" level=info msg="StopPodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" returns successfully" Nov 12 17:42:55.305162 containerd[2017]: time="2024-11-12T17:42:55.304657185Z" level=info msg="RemovePodSandbox for \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\"" Nov 12 17:42:55.305162 containerd[2017]: time="2024-11-12T17:42:55.304717713Z" level=info msg="Forcibly stopping sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\"" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.582 [WARNING][5903] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"2856c5dd-06c7-4491-84ce-4e60da83a6ac", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"a612188f632b888a11da6e12f7e240f36ebba5fbca48ec0883d057cfc74f4c2e", Pod:"coredns-7db6d8ff4d-4twwd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali66b4c95d73f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.586 [INFO][5903] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.586 [INFO][5903] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" iface="eth0" netns="" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.586 [INFO][5903] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.586 [INFO][5903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.733 [INFO][5909] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.733 [INFO][5909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.733 [INFO][5909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.757 [WARNING][5909] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.757 [INFO][5909] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" HandleID="k8s-pod-network.67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--4twwd-eth0" Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.761 [INFO][5909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:55.781989 containerd[2017]: 2024-11-12 17:42:55.766 [INFO][5903] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd" Nov 12 17:42:55.781989 containerd[2017]: time="2024-11-12T17:42:55.780065819Z" level=info msg="TearDown network for sandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" successfully" Nov 12 17:42:55.799770 containerd[2017]: time="2024-11-12T17:42:55.799637123Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:55.800084 containerd[2017]: time="2024-11-12T17:42:55.799821863Z" level=info msg="RemovePodSandbox \"67fa5b15c12f0dd5dc5077133443b856b19cce6acb690bc4a064c18198f8d7cd\" returns successfully" Nov 12 17:42:55.806102 containerd[2017]: time="2024-11-12T17:42:55.803699471Z" level=info msg="StopPodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\"" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.136 [WARNING][5928] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9", Pod:"coredns-7db6d8ff4d-lczwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec6c29922f3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.140 [INFO][5928] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.140 [INFO][5928] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" iface="eth0" netns="" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.142 [INFO][5928] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.142 [INFO][5928] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.313 [INFO][5935] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.316 [INFO][5935] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.316 [INFO][5935] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.339 [WARNING][5935] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.339 [INFO][5935] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.353 [INFO][5935] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:56.368992 containerd[2017]: 2024-11-12 17:42:56.359 [INFO][5928] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.371643 containerd[2017]: time="2024-11-12T17:42:56.369062770Z" level=info msg="TearDown network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" successfully" Nov 12 17:42:56.371643 containerd[2017]: time="2024-11-12T17:42:56.369111190Z" level=info msg="StopPodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" returns successfully" Nov 12 17:42:56.371643 containerd[2017]: time="2024-11-12T17:42:56.370043302Z" level=info msg="RemovePodSandbox for \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\"" Nov 12 17:42:56.371643 containerd[2017]: time="2024-11-12T17:42:56.370089754Z" level=info msg="Forcibly stopping sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\"" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.587 [WARNING][5954] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e6b92c0e-f7a3-46f3-a24e-5d039c6a77d5", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"d11db958bc6c5a46b08a530bcfccb19e6944602c9442a84137fc573fdd75dfb9", Pod:"coredns-7db6d8ff4d-lczwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliec6c29922f3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.588 [INFO][5954] cni-plugin/k8s.go 608: 
Cleaning up netns ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.588 [INFO][5954] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" iface="eth0" netns="" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.588 [INFO][5954] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.588 [INFO][5954] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.735 [INFO][5961] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.736 [INFO][5961] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.736 [INFO][5961] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.794 [WARNING][5961] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.798 [INFO][5961] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" HandleID="k8s-pod-network.9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Workload="ip--172--31--24--62-k8s-coredns--7db6d8ff4d--lczwt-eth0" Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.804 [INFO][5961] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:56.829314 containerd[2017]: 2024-11-12 17:42:56.817 [INFO][5954] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8" Nov 12 17:42:56.831238 containerd[2017]: time="2024-11-12T17:42:56.829515072Z" level=info msg="TearDown network for sandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" successfully" Nov 12 17:42:56.843183 containerd[2017]: time="2024-11-12T17:42:56.842616252Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:56.843183 containerd[2017]: time="2024-11-12T17:42:56.842746524Z" level=info msg="RemovePodSandbox \"9b93d86caa1e0ce9e2655e38d2f17710931eecfe42b4369b80a033b7d0251ae8\" returns successfully" Nov 12 17:42:56.848226 containerd[2017]: time="2024-11-12T17:42:56.846289296Z" level=info msg="StopPodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\"" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.079 [WARNING][5979] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c76a9e4-323d-472f-a6f5-2dfa31d17b05", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334", Pod:"calico-apiserver-94db85cfb-bhh4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cdcaf8b6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.079 [INFO][5979] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.079 [INFO][5979] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" iface="eth0" netns="" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.079 [INFO][5979] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.079 [INFO][5979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.169 [INFO][5986] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.169 [INFO][5986] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.169 [INFO][5986] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.204 [WARNING][5986] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.204 [INFO][5986] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.214 [INFO][5986] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:57.232195 containerd[2017]: 2024-11-12 17:42:57.225 [INFO][5979] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.232195 containerd[2017]: time="2024-11-12T17:42:57.231275578Z" level=info msg="TearDown network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" successfully" Nov 12 17:42:57.240988 containerd[2017]: time="2024-11-12T17:42:57.231380650Z" level=info msg="StopPodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" returns successfully" Nov 12 17:42:57.240988 containerd[2017]: time="2024-11-12T17:42:57.238497262Z" level=info msg="RemovePodSandbox for \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\"" Nov 12 17:42:57.240988 containerd[2017]: time="2024-11-12T17:42:57.238574242Z" level=info msg="Forcibly stopping sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\"" Nov 12 17:42:57.258951 systemd[1]: Started sshd@11-172.31.24.62:22-139.178.89.65:40610.service - OpenSSH per-connection server daemon (139.178.89.65:40610). 
Nov 12 17:42:57.512414 sshd[5995]: Accepted publickey for core from 139.178.89.65 port 40610 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:57.521594 sshd[5995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:57.549459 systemd-logind[1993]: New session 12 of user core. Nov 12 17:42:57.565015 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 17:42:57.720297 containerd[2017]: time="2024-11-12T17:42:57.718677781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:57.722328 containerd[2017]: time="2024-11-12T17:42:57.722269777Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=31961371" Nov 12 17:42:57.724515 containerd[2017]: time="2024-11-12T17:42:57.724452673Z" level=info msg="ImageCreate event name:\"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.484 [WARNING][6007] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0", GenerateName:"calico-apiserver-94db85cfb-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c76a9e4-323d-472f-a6f5-2dfa31d17b05", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"94db85cfb", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"b86337e2c898e15107d8c0e73cb814dd2237de39a083ed6cf7297ba7ee765334", Pod:"calico-apiserver-94db85cfb-bhh4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39cdcaf8b6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.485 [INFO][6007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.489 [INFO][6007] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" iface="eth0" netns="" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.489 [INFO][6007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.489 [INFO][6007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.665 [INFO][6014] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.667 [INFO][6014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.667 [INFO][6014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.704 [WARNING][6014] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.704 [INFO][6014] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" HandleID="k8s-pod-network.e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Workload="ip--172--31--24--62-k8s-calico--apiserver--94db85cfb--bhh4t-eth0" Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.715 [INFO][6014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:57.737044 containerd[2017]: 2024-11-12 17:42:57.728 [INFO][6007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5" Nov 12 17:42:57.737824 containerd[2017]: time="2024-11-12T17:42:57.737087065Z" level=info msg="TearDown network for sandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" successfully" Nov 12 17:42:57.746079 containerd[2017]: time="2024-11-12T17:42:57.745804285Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:57.752180 containerd[2017]: time="2024-11-12T17:42:57.751506313Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"33330975\" in 5.094199309s" Nov 12 17:42:57.752180 
containerd[2017]: time="2024-11-12T17:42:57.751600849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:526584192bc71f907fcb2d2ef01be0c760fee2ab7bb1e05e41ad9ade98a986b3\"" Nov 12 17:42:57.763211 containerd[2017]: time="2024-11-12T17:42:57.762969805Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 17:42:57.792774 containerd[2017]: time="2024-11-12T17:42:57.792268825Z" level=info msg="CreateContainer within sandbox \"eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 17:42:57.797468 containerd[2017]: time="2024-11-12T17:42:57.796986529Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 17:42:57.798519 containerd[2017]: time="2024-11-12T17:42:57.798420277Z" level=info msg="RemovePodSandbox \"e49342a7fa78e5309ca52d01b00f01b3122d8cdf22f9128c2a956e4aad2fdba5\" returns successfully" Nov 12 17:42:57.805827 containerd[2017]: time="2024-11-12T17:42:57.805573945Z" level=info msg="StopPodSandbox for \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\"" Nov 12 17:42:57.840796 containerd[2017]: time="2024-11-12T17:42:57.840724345Z" level=info msg="CreateContainer within sandbox \"eeee31dfb42eed24b15ff240e407c1fc82e264f2fddcb97ab974d68137b79c98\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc\"" Nov 12 17:42:57.857047 containerd[2017]: time="2024-11-12T17:42:57.848858977Z" level=info msg="StartContainer for \"9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc\"" Nov 12 17:42:58.050256 systemd[1]: Started cri-containerd-9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc.scope - libcontainer container 9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc. Nov 12 17:42:58.159253 sshd[5995]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:58.172781 systemd[1]: sshd@11-172.31.24.62:22-139.178.89.65:40610.service: Deactivated successfully. Nov 12 17:42:58.183510 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 17:42:58.190317 systemd-logind[1993]: Session 12 logged out. Waiting for processes to exit. Nov 12 17:42:58.231058 systemd[1]: Started sshd@12-172.31.24.62:22-139.178.89.65:40626.service - OpenSSH per-connection server daemon (139.178.89.65:40626). Nov 12 17:42:58.246455 systemd-logind[1993]: Removed session 12. 
Nov 12 17:42:58.284617 containerd[2017]: time="2024-11-12T17:42:58.284541840Z" level=info msg="StartContainer for \"9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc\" returns successfully" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.090 [WARNING][6046] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"272a2aec-8f98-4451-9782-58222f5f8977", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086", Pod:"csi-node-driver-qnp2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife98935dc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.092 [INFO][6046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.092 [INFO][6046] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" iface="eth0" netns="" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.092 [INFO][6046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.092 [INFO][6046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.218 [INFO][6076] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.218 [INFO][6076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.218 [INFO][6076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.269 [WARNING][6076] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.270 [INFO][6076] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.277 [INFO][6076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:58.297421 containerd[2017]: 2024-11-12 17:42:58.292 [INFO][6046] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.303449 containerd[2017]: time="2024-11-12T17:42:58.303203904Z" level=info msg="TearDown network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" successfully" Nov 12 17:42:58.303449 containerd[2017]: time="2024-11-12T17:42:58.303260520Z" level=info msg="StopPodSandbox for \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" returns successfully" Nov 12 17:42:58.307174 containerd[2017]: time="2024-11-12T17:42:58.305098656Z" level=info msg="RemovePodSandbox for \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\"" Nov 12 17:42:58.307174 containerd[2017]: time="2024-11-12T17:42:58.305191092Z" level=info msg="Forcibly stopping sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\"" Nov 12 17:42:58.538185 sshd[6091]: Accepted publickey for core from 139.178.89.65 port 40626 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:58.546472 sshd[6091]: pam_unix(sshd:session): session opened for user 
core(uid=500) by core(uid=0) Nov 12 17:42:58.573050 systemd-logind[1993]: New session 13 of user core. Nov 12 17:42:58.581704 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.516 [WARNING][6111] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"272a2aec-8f98-4451-9782-58222f5f8977", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 17, 42, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-24-62", ContainerID:"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086", Pod:"csi-node-driver-qnp2k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calife98935dc8b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 
17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.523 [INFO][6111] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.524 [INFO][6111] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" iface="eth0" netns="" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.524 [INFO][6111] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.525 [INFO][6111] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.765 [INFO][6118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.770 [INFO][6118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.776 [INFO][6118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.826 [WARNING][6118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.826 [INFO][6118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" HandleID="k8s-pod-network.21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Workload="ip--172--31--24--62-k8s-csi--node--driver--qnp2k-eth0" Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.838 [INFO][6118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 17:42:58.864070 containerd[2017]: 2024-11-12 17:42:58.854 [INFO][6111] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507" Nov 12 17:42:58.864070 containerd[2017]: time="2024-11-12T17:42:58.862480730Z" level=info msg="TearDown network for sandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" successfully" Nov 12 17:42:58.871515 containerd[2017]: time="2024-11-12T17:42:58.871315419Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Nov 12 17:42:58.871876 containerd[2017]: time="2024-11-12T17:42:58.871595655Z" level=info msg="RemovePodSandbox \"21b1accba3b7fcba1abf6514f623d594b7ccce3b89ea9760900f5b5c31ef5507\" returns successfully" Nov 12 17:42:59.075329 sshd[6091]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:59.092736 systemd[1]: session-13.scope: Deactivated successfully. Nov 12 17:42:59.094427 systemd[1]: sshd@12-172.31.24.62:22-139.178.89.65:40626.service: Deactivated successfully. 
Nov 12 17:42:59.104574 systemd-logind[1993]: Session 13 logged out. Waiting for processes to exit. Nov 12 17:42:59.131393 systemd[1]: Started sshd@13-172.31.24.62:22-139.178.89.65:40634.service - OpenSSH per-connection server daemon (139.178.89.65:40634). Nov 12 17:42:59.141532 systemd-logind[1993]: Removed session 13. Nov 12 17:42:59.333561 kubelet[3479]: I1112 17:42:59.333182 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b45568475-v2mpw" podStartSLOduration=32.336819376 podStartE2EDuration="41.333146665s" podCreationTimestamp="2024-11-12 17:42:18 +0000 UTC" firstStartedPulling="2024-11-12 17:42:48.762466036 +0000 UTC m=+55.403537628" lastFinishedPulling="2024-11-12 17:42:57.758793325 +0000 UTC m=+64.399864917" observedRunningTime="2024-11-12 17:42:58.601833373 +0000 UTC m=+65.242905025" watchObservedRunningTime="2024-11-12 17:42:59.333146665 +0000 UTC m=+65.974218281" Nov 12 17:42:59.379462 sshd[6160]: Accepted publickey for core from 139.178.89.65 port 40634 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:42:59.384514 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:42:59.405615 systemd-logind[1993]: New session 14 of user core. Nov 12 17:42:59.417545 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 12 17:42:59.865017 sshd[6160]: pam_unix(sshd:session): session closed for user core Nov 12 17:42:59.878128 systemd[1]: sshd@13-172.31.24.62:22-139.178.89.65:40634.service: Deactivated successfully. Nov 12 17:42:59.885349 systemd[1]: session-14.scope: Deactivated successfully. Nov 12 17:42:59.888323 systemd-logind[1993]: Session 14 logged out. Waiting for processes to exit. Nov 12 17:42:59.892490 systemd-logind[1993]: Removed session 14. 
Nov 12 17:42:59.922153 containerd[2017]: time="2024-11-12T17:42:59.922013860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:59.924764 containerd[2017]: time="2024-11-12T17:42:59.924575128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=9883360" Nov 12 17:42:59.927281 containerd[2017]: time="2024-11-12T17:42:59.927049708Z" level=info msg="ImageCreate event name:\"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:59.937636 containerd[2017]: time="2024-11-12T17:42:59.937484020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 17:42:59.939712 containerd[2017]: time="2024-11-12T17:42:59.939479644Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11252948\" in 2.176030439s" Nov 12 17:42:59.939712 containerd[2017]: time="2024-11-12T17:42:59.939557656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:fe02b0a9952e3e3b3828f30f55de14ed8db1a2c781e5563c5c70e2a748e28486\"" Nov 12 17:42:59.948275 containerd[2017]: time="2024-11-12T17:42:59.948179500Z" level=info msg="CreateContainer within sandbox \"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 17:42:59.992139 containerd[2017]: time="2024-11-12T17:42:59.991888420Z" level=info msg="CreateContainer within sandbox \"d343fb4c55a30741fe7211dc2f5fc67877975e478cdc0d7e7198c0874c464086\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"012da4d4852120942319d628af87b289d2d90d02c54a7ace407f9e63976499bd\"" Nov 12 17:42:59.997265 containerd[2017]: time="2024-11-12T17:42:59.997124104Z" level=info msg="StartContainer for \"012da4d4852120942319d628af87b289d2d90d02c54a7ace407f9e63976499bd\"" Nov 12 17:43:00.075548 systemd[1]: Started cri-containerd-012da4d4852120942319d628af87b289d2d90d02c54a7ace407f9e63976499bd.scope - libcontainer container 012da4d4852120942319d628af87b289d2d90d02c54a7ace407f9e63976499bd. Nov 12 17:43:00.152997 containerd[2017]: time="2024-11-12T17:43:00.152714125Z" level=info msg="StartContainer for \"012da4d4852120942319d628af87b289d2d90d02c54a7ace407f9e63976499bd\" returns successfully" Nov 12 17:43:00.550823 kubelet[3479]: I1112 17:43:00.549160 3479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qnp2k" podStartSLOduration=26.880854457 podStartE2EDuration="42.549135243s" podCreationTimestamp="2024-11-12 17:42:18 +0000 UTC" firstStartedPulling="2024-11-12 17:42:44.273747322 +0000 UTC m=+50.914818914" lastFinishedPulling="2024-11-12 17:42:59.94202812 +0000 UTC m=+66.583099700" observedRunningTime="2024-11-12 17:43:00.548293395 +0000 UTC m=+67.189364999" watchObservedRunningTime="2024-11-12 17:43:00.549135243 +0000 UTC m=+67.190206835" Nov 12 17:43:00.934751 kubelet[3479]: I1112 17:43:00.934691 3479 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 17:43:00.934751 kubelet[3479]: I1112 17:43:00.934747 3479 csi_plugin.go:113] kubernetes.io/csi: Register new plugin 
with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 17:43:04.910556 systemd[1]: Started sshd@14-172.31.24.62:22-139.178.89.65:40650.service - OpenSSH per-connection server daemon (139.178.89.65:40650). Nov 12 17:43:05.131451 sshd[6235]: Accepted publickey for core from 139.178.89.65 port 40650 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:43:05.137194 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:43:05.150962 systemd-logind[1993]: New session 15 of user core. Nov 12 17:43:05.158237 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 12 17:43:05.473253 sshd[6235]: pam_unix(sshd:session): session closed for user core Nov 12 17:43:05.483601 systemd[1]: sshd@14-172.31.24.62:22-139.178.89.65:40650.service: Deactivated successfully. Nov 12 17:43:05.492476 systemd[1]: session-15.scope: Deactivated successfully. Nov 12 17:43:05.496810 systemd-logind[1993]: Session 15 logged out. Waiting for processes to exit. Nov 12 17:43:05.503159 systemd-logind[1993]: Removed session 15. Nov 12 17:43:09.312968 kubelet[3479]: I1112 17:43:09.312413 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 17:43:10.517541 systemd[1]: Started sshd@15-172.31.24.62:22-139.178.89.65:46944.service - OpenSSH per-connection server daemon (139.178.89.65:46944). Nov 12 17:43:10.709141 sshd[6282]: Accepted publickey for core from 139.178.89.65 port 46944 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:43:10.713720 sshd[6282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:43:10.728120 systemd-logind[1993]: New session 16 of user core. Nov 12 17:43:10.738196 systemd[1]: Started session-16.scope - Session 16 of User core. 
Nov 12 17:43:11.023132 sshd[6282]: pam_unix(sshd:session): session closed for user core Nov 12 17:43:11.038388 systemd[1]: sshd@15-172.31.24.62:22-139.178.89.65:46944.service: Deactivated successfully. Nov 12 17:43:11.046810 systemd[1]: session-16.scope: Deactivated successfully. Nov 12 17:43:11.051320 systemd-logind[1993]: Session 16 logged out. Waiting for processes to exit. Nov 12 17:43:11.057129 systemd-logind[1993]: Removed session 16. Nov 12 17:43:16.076611 systemd[1]: Started sshd@16-172.31.24.62:22-139.178.89.65:46958.service - OpenSSH per-connection server daemon (139.178.89.65:46958). Nov 12 17:43:16.254799 sshd[6296]: Accepted publickey for core from 139.178.89.65 port 46958 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:43:16.260068 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:43:16.271259 systemd-logind[1993]: New session 17 of user core. Nov 12 17:43:16.278249 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 12 17:43:16.564987 sshd[6296]: pam_unix(sshd:session): session closed for user core Nov 12 17:43:16.571720 systemd[1]: sshd@16-172.31.24.62:22-139.178.89.65:46958.service: Deactivated successfully. Nov 12 17:43:16.576582 systemd[1]: session-17.scope: Deactivated successfully. Nov 12 17:43:16.579151 systemd-logind[1993]: Session 17 logged out. Waiting for processes to exit. Nov 12 17:43:16.585793 systemd-logind[1993]: Removed session 17. Nov 12 17:43:21.613350 systemd[1]: Started sshd@17-172.31.24.62:22-139.178.89.65:45662.service - OpenSSH per-connection server daemon (139.178.89.65:45662). Nov 12 17:43:21.805724 sshd[6315]: Accepted publickey for core from 139.178.89.65 port 45662 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk Nov 12 17:43:21.809827 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 17:43:21.821240 systemd-logind[1993]: New session 18 of user core. 
Nov 12 17:43:21.828741 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 17:43:22.179615 sshd[6315]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:22.186179 systemd[1]: sshd@17-172.31.24.62:22-139.178.89.65:45662.service: Deactivated successfully.
Nov 12 17:43:22.191126 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 17:43:22.196229 systemd-logind[1993]: Session 18 logged out. Waiting for processes to exit.
Nov 12 17:43:22.219395 systemd[1]: Started sshd@18-172.31.24.62:22-139.178.89.65:45676.service - OpenSSH per-connection server daemon (139.178.89.65:45676).
Nov 12 17:43:22.224261 systemd-logind[1993]: Removed session 18.
Nov 12 17:43:22.420667 sshd[6328]: Accepted publickey for core from 139.178.89.65 port 45676 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:22.424696 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:22.437530 systemd-logind[1993]: New session 19 of user core.
Nov 12 17:43:22.443358 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 17:43:23.015390 sshd[6328]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:23.025517 systemd[1]: sshd@18-172.31.24.62:22-139.178.89.65:45676.service: Deactivated successfully.
Nov 12 17:43:23.030875 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 17:43:23.033158 systemd-logind[1993]: Session 19 logged out. Waiting for processes to exit.
Nov 12 17:43:23.054442 systemd[1]: Started sshd@19-172.31.24.62:22-139.178.89.65:45680.service - OpenSSH per-connection server daemon (139.178.89.65:45680).
Nov 12 17:43:23.056803 systemd-logind[1993]: Removed session 19.
Nov 12 17:43:23.234871 sshd[6339]: Accepted publickey for core from 139.178.89.65 port 45680 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:23.239031 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:23.250116 systemd-logind[1993]: New session 20 of user core.
Nov 12 17:43:23.260542 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 17:43:27.196274 sshd[6339]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:27.201236 kubelet[3479]: I1112 17:43:27.200522 3479 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 17:43:27.215388 systemd[1]: sshd@19-172.31.24.62:22-139.178.89.65:45680.service: Deactivated successfully.
Nov 12 17:43:27.234970 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 17:43:27.237046 systemd[1]: session-20.scope: Consumed 1.273s CPU time.
Nov 12 17:43:27.262687 systemd-logind[1993]: Session 20 logged out. Waiting for processes to exit.
Nov 12 17:43:27.272488 systemd[1]: Started sshd@20-172.31.24.62:22-139.178.89.65:51538.service - OpenSSH per-connection server daemon (139.178.89.65:51538).
Nov 12 17:43:27.283380 systemd-logind[1993]: Removed session 20.
Nov 12 17:43:27.500395 sshd[6356]: Accepted publickey for core from 139.178.89.65 port 51538 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:27.505489 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:27.517685 systemd-logind[1993]: New session 21 of user core.
Nov 12 17:43:27.524461 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 17:43:28.146579 sshd[6356]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:28.159423 systemd[1]: sshd@20-172.31.24.62:22-139.178.89.65:51538.service: Deactivated successfully.
Nov 12 17:43:28.159835 systemd-logind[1993]: Session 21 logged out. Waiting for processes to exit.
Nov 12 17:43:28.165808 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 17:43:28.185186 systemd-logind[1993]: Removed session 21.
Nov 12 17:43:28.192285 systemd[1]: Started sshd@21-172.31.24.62:22-139.178.89.65:51544.service - OpenSSH per-connection server daemon (139.178.89.65:51544).
Nov 12 17:43:28.379306 sshd[6368]: Accepted publickey for core from 139.178.89.65 port 51544 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:28.382465 sshd[6368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:28.393032 systemd-logind[1993]: New session 22 of user core.
Nov 12 17:43:28.401826 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 17:43:28.673520 sshd[6368]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:28.686707 systemd[1]: sshd@21-172.31.24.62:22-139.178.89.65:51544.service: Deactivated successfully.
Nov 12 17:43:28.691635 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 17:43:28.696095 systemd-logind[1993]: Session 22 logged out. Waiting for processes to exit.
Nov 12 17:43:28.699081 systemd-logind[1993]: Removed session 22.
Nov 12 17:43:33.712472 systemd[1]: Started sshd@22-172.31.24.62:22-139.178.89.65:51548.service - OpenSSH per-connection server daemon (139.178.89.65:51548).
Nov 12 17:43:33.896322 sshd[6406]: Accepted publickey for core from 139.178.89.65 port 51548 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:33.900344 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:33.913764 systemd-logind[1993]: New session 23 of user core.
Nov 12 17:43:33.924572 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 17:43:34.206139 sshd[6406]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:34.215403 systemd[1]: sshd@22-172.31.24.62:22-139.178.89.65:51548.service: Deactivated successfully.
Nov 12 17:43:34.221960 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 17:43:34.225827 systemd-logind[1993]: Session 23 logged out. Waiting for processes to exit.
Nov 12 17:43:34.229179 systemd-logind[1993]: Removed session 23.
Nov 12 17:43:39.252172 systemd[1]: Started sshd@23-172.31.24.62:22-139.178.89.65:48004.service - OpenSSH per-connection server daemon (139.178.89.65:48004).
Nov 12 17:43:39.450802 sshd[6447]: Accepted publickey for core from 139.178.89.65 port 48004 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:39.452978 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:39.477271 systemd-logind[1993]: New session 24 of user core.
Nov 12 17:43:39.485228 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 17:43:39.857001 sshd[6447]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:39.866092 systemd-logind[1993]: Session 24 logged out. Waiting for processes to exit.
Nov 12 17:43:39.867759 systemd[1]: sshd@23-172.31.24.62:22-139.178.89.65:48004.service: Deactivated successfully.
Nov 12 17:43:39.878226 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 17:43:39.886610 systemd-logind[1993]: Removed session 24.
Nov 12 17:43:44.907763 systemd[1]: Started sshd@24-172.31.24.62:22-139.178.89.65:48018.service - OpenSSH per-connection server daemon (139.178.89.65:48018).
Nov 12 17:43:45.125460 sshd[6460]: Accepted publickey for core from 139.178.89.65 port 48018 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:45.130782 sshd[6460]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:45.157919 systemd-logind[1993]: New session 25 of user core.
Nov 12 17:43:45.169261 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 17:43:45.521201 sshd[6460]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:45.534177 systemd[1]: sshd@24-172.31.24.62:22-139.178.89.65:48018.service: Deactivated successfully.
Nov 12 17:43:45.540795 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 17:43:45.549066 systemd-logind[1993]: Session 25 logged out. Waiting for processes to exit.
Nov 12 17:43:45.553140 systemd-logind[1993]: Removed session 25.
Nov 12 17:43:50.566492 systemd[1]: Started sshd@25-172.31.24.62:22-139.178.89.65:49112.service - OpenSSH per-connection server daemon (139.178.89.65:49112).
Nov 12 17:43:50.763191 sshd[6475]: Accepted publickey for core from 139.178.89.65 port 49112 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:50.767277 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:50.778884 systemd-logind[1993]: New session 26 of user core.
Nov 12 17:43:50.786224 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 17:43:51.083389 sshd[6475]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:51.092179 systemd[1]: sshd@25-172.31.24.62:22-139.178.89.65:49112.service: Deactivated successfully.
Nov 12 17:43:51.092273 systemd-logind[1993]: Session 26 logged out. Waiting for processes to exit.
Nov 12 17:43:51.102611 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 17:43:51.113400 systemd-logind[1993]: Removed session 26.
Nov 12 17:43:56.127736 systemd[1]: Started sshd@26-172.31.24.62:22-139.178.89.65:49122.service - OpenSSH per-connection server daemon (139.178.89.65:49122).
Nov 12 17:43:56.321151 sshd[6489]: Accepted publickey for core from 139.178.89.65 port 49122 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:43:56.324442 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:43:56.335831 systemd-logind[1993]: New session 27 of user core.
Nov 12 17:43:56.344228 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 17:43:56.630643 sshd[6489]: pam_unix(sshd:session): session closed for user core
Nov 12 17:43:56.637402 systemd[1]: sshd@26-172.31.24.62:22-139.178.89.65:49122.service: Deactivated successfully.
Nov 12 17:43:56.644357 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 17:43:56.648013 systemd-logind[1993]: Session 27 logged out. Waiting for processes to exit.
Nov 12 17:43:56.651373 systemd-logind[1993]: Removed session 27.
Nov 12 17:44:00.139460 systemd[1]: run-containerd-runc-k8s.io-9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc-runc.BhxNdU.mount: Deactivated successfully.
Nov 12 17:44:01.673491 systemd[1]: Started sshd@27-172.31.24.62:22-139.178.89.65:42588.service - OpenSSH per-connection server daemon (139.178.89.65:42588).
Nov 12 17:44:01.852823 sshd[6547]: Accepted publickey for core from 139.178.89.65 port 42588 ssh2: RSA SHA256:1a90X/uDC0ILhfMiA2YbbwEMVTxtJewsfiol0dYezPk
Nov 12 17:44:01.856838 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 17:44:01.869850 systemd-logind[1993]: New session 28 of user core.
Nov 12 17:44:01.882261 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 17:44:02.148172 sshd[6547]: pam_unix(sshd:session): session closed for user core
Nov 12 17:44:02.155648 systemd[1]: sshd@27-172.31.24.62:22-139.178.89.65:42588.service: Deactivated successfully.
Nov 12 17:44:02.161165 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 17:44:02.164892 systemd-logind[1993]: Session 28 logged out. Waiting for processes to exit.
Nov 12 17:44:02.169348 systemd-logind[1993]: Removed session 28.
Nov 12 17:44:05.095433 systemd[1]: run-containerd-runc-k8s.io-04bf7f46931e7a57623137dfcb12103cfd52fb655b0f0414defdf9e26a008660-runc.xH0ay2.mount: Deactivated successfully.
Nov 12 17:44:15.983563 kubelet[3479]: E1112 17:44:15.983321 3479 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Nov 12 17:44:16.103611 systemd[1]: cri-containerd-5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653.scope: Deactivated successfully.
Nov 12 17:44:16.105932 systemd[1]: cri-containerd-5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653.scope: Consumed 7.869s CPU time.
Nov 12 17:44:16.161826 containerd[2017]: time="2024-11-12T17:44:16.161665214Z" level=info msg="shim disconnected" id=5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653 namespace=k8s.io
Nov 12 17:44:16.161826 containerd[2017]: time="2024-11-12T17:44:16.161804330Z" level=warning msg="cleaning up after shim disconnected" id=5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653 namespace=k8s.io
Nov 12 17:44:16.161826 containerd[2017]: time="2024-11-12T17:44:16.161828558Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:44:16.169607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653-rootfs.mount: Deactivated successfully.
Nov 12 17:44:16.230065 containerd[2017]: time="2024-11-12T17:44:16.229801923Z" level=warning msg="cleanup warnings time=\"2024-11-12T17:44:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Nov 12 17:44:16.506212 systemd[1]: cri-containerd-7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7.scope: Deactivated successfully.
Nov 12 17:44:16.506713 systemd[1]: cri-containerd-7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7.scope: Consumed 5.733s CPU time, 22.3M memory peak, 0B memory swap peak.
Nov 12 17:44:16.569165 containerd[2017]: time="2024-11-12T17:44:16.566343280Z" level=info msg="shim disconnected" id=7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7 namespace=k8s.io
Nov 12 17:44:16.569165 containerd[2017]: time="2024-11-12T17:44:16.566465872Z" level=warning msg="cleaning up after shim disconnected" id=7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7 namespace=k8s.io
Nov 12 17:44:16.569165 containerd[2017]: time="2024-11-12T17:44:16.566491120Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:44:16.572818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7-rootfs.mount: Deactivated successfully.
Nov 12 17:44:16.831760 kubelet[3479]: I1112 17:44:16.831703 3479 scope.go:117] "RemoveContainer" containerID="5b8df466c955b99ea2a60abf92ecd33fa67b84424f6bd40dbe7b4ae8b82ad653"
Nov 12 17:44:16.838256 kubelet[3479]: I1112 17:44:16.837204 3479 scope.go:117] "RemoveContainer" containerID="7694ced18a62c59f50f0837e9c5a8760e1b90d2e33c514f1215446065dd032f7"
Nov 12 17:44:16.838555 containerd[2017]: time="2024-11-12T17:44:16.838178682Z" level=info msg="CreateContainer within sandbox \"bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Nov 12 17:44:16.845006 containerd[2017]: time="2024-11-12T17:44:16.844684806Z" level=info msg="CreateContainer within sandbox \"4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 12 17:44:16.882592 containerd[2017]: time="2024-11-12T17:44:16.882217290Z" level=info msg="CreateContainer within sandbox \"bae47eee4f060c0818bb66b99ab44e4043f7b5366be583ef49ee39c9b5ffef84\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6c7e011e7625cc733985a16b7c7d3e31bd41dd495060026cd704b72d33154f66\""
Nov 12 17:44:16.883842 containerd[2017]: time="2024-11-12T17:44:16.883568790Z" level=info msg="StartContainer for \"6c7e011e7625cc733985a16b7c7d3e31bd41dd495060026cd704b72d33154f66\""
Nov 12 17:44:16.892401 containerd[2017]: time="2024-11-12T17:44:16.892284930Z" level=info msg="CreateContainer within sandbox \"4291c177aeea1bb6c50fe3010cd6ad69b3063f847cdf341fa381ef71aec50dbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"884871352a7941c3d4fa82faaf5db122f60a859e5cfd25f0462cb61e05f764b6\""
Nov 12 17:44:16.894098 containerd[2017]: time="2024-11-12T17:44:16.893390418Z" level=info msg="StartContainer for \"884871352a7941c3d4fa82faaf5db122f60a859e5cfd25f0462cb61e05f764b6\""
Nov 12 17:44:16.967551 systemd[1]: Started cri-containerd-6c7e011e7625cc733985a16b7c7d3e31bd41dd495060026cd704b72d33154f66.scope - libcontainer container 6c7e011e7625cc733985a16b7c7d3e31bd41dd495060026cd704b72d33154f66.
Nov 12 17:44:16.996553 systemd[1]: Started cri-containerd-884871352a7941c3d4fa82faaf5db122f60a859e5cfd25f0462cb61e05f764b6.scope - libcontainer container 884871352a7941c3d4fa82faaf5db122f60a859e5cfd25f0462cb61e05f764b6.
Nov 12 17:44:17.059866 containerd[2017]: time="2024-11-12T17:44:17.059790651Z" level=info msg="StartContainer for \"6c7e011e7625cc733985a16b7c7d3e31bd41dd495060026cd704b72d33154f66\" returns successfully"
Nov 12 17:44:17.097879 containerd[2017]: time="2024-11-12T17:44:17.097274043Z" level=info msg="StartContainer for \"884871352a7941c3d4fa82faaf5db122f60a859e5cfd25f0462cb61e05f764b6\" returns successfully"
Nov 12 17:44:21.869713 systemd[1]: cri-containerd-e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d.scope: Deactivated successfully.
Nov 12 17:44:21.872158 systemd[1]: cri-containerd-e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d.scope: Consumed 4.954s CPU time, 15.9M memory peak, 0B memory swap peak.
Nov 12 17:44:21.924516 containerd[2017]: time="2024-11-12T17:44:21.924116711Z" level=info msg="shim disconnected" id=e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d namespace=k8s.io
Nov 12 17:44:21.924516 containerd[2017]: time="2024-11-12T17:44:21.924227327Z" level=warning msg="cleaning up after shim disconnected" id=e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d namespace=k8s.io
Nov 12 17:44:21.924516 containerd[2017]: time="2024-11-12T17:44:21.924249659Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 17:44:21.928573 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d-rootfs.mount: Deactivated successfully.
Nov 12 17:44:22.887116 kubelet[3479]: I1112 17:44:22.886417 3479 scope.go:117] "RemoveContainer" containerID="e0ecaef73fb89e5168806f179c15c3e813b561e1ea86d86f2e59ab8d8d1e982d"
Nov 12 17:44:22.892850 containerd[2017]: time="2024-11-12T17:44:22.892779960Z" level=info msg="CreateContainer within sandbox \"8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 12 17:44:22.925170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211168786.mount: Deactivated successfully.
Nov 12 17:44:22.926603 containerd[2017]: time="2024-11-12T17:44:22.925526364Z" level=info msg="CreateContainer within sandbox \"8c11381c758660deddec61d621ba94a93805dafff0f8502e84c8e473d840aea6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"a2b835887652efa80a24b611087fdeaa3491066994f61a3a3e565f44819a929f\""
Nov 12 17:44:22.928493 containerd[2017]: time="2024-11-12T17:44:22.927852456Z" level=info msg="StartContainer for \"a2b835887652efa80a24b611087fdeaa3491066994f61a3a3e565f44819a929f\""
Nov 12 17:44:22.997239 systemd[1]: Started cri-containerd-a2b835887652efa80a24b611087fdeaa3491066994f61a3a3e565f44819a929f.scope - libcontainer container a2b835887652efa80a24b611087fdeaa3491066994f61a3a3e565f44819a929f.
Nov 12 17:44:23.073074 containerd[2017]: time="2024-11-12T17:44:23.072802569Z" level=info msg="StartContainer for \"a2b835887652efa80a24b611087fdeaa3491066994f61a3a3e565f44819a929f\" returns successfully"
Nov 12 17:44:25.985404 kubelet[3479]: E1112 17:44:25.984833 3479 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": context deadline exceeded"
Nov 12 17:44:29.103850 systemd[1]: run-containerd-runc-k8s.io-9eb9cb2c9d2ddb2eeede351e34b5fad4cb02d68234bf494eded96a98236d3fbc-runc.3OEBjG.mount: Deactivated successfully.
Nov 12 17:44:35.986557 kubelet[3479]: E1112 17:44:35.986189 3479 controller.go:195] "Failed to update lease" err="Put \"https://172.31.24.62:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-24-62?timeout=10s\": context deadline exceeded"