Aug 5 21:41:48.347322 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 5 21:41:48.347345 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024 Aug 5 21:41:48.347353 kernel: KASLR enabled Aug 5 21:41:48.347361 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Aug 5 21:41:48.347367 kernel: printk: bootconsole [pl11] enabled Aug 5 21:41:48.347373 kernel: efi: EFI v2.7 by EDK II Aug 5 21:41:48.347380 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Aug 5 21:41:48.347386 kernel: random: crng init done Aug 5 21:41:48.347392 kernel: ACPI: Early table checksum verification disabled Aug 5 21:41:48.347398 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Aug 5 21:41:48.347405 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347411 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347419 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 5 21:41:48.347425 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347433 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347439 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347447 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347455 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347461 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347468 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Aug 5 21:41:48.347474 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347481 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Aug 5 21:41:48.347487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Aug 5 21:41:48.347494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Aug 5 21:41:48.347500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Aug 5 21:41:48.347506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Aug 5 21:41:48.347513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Aug 5 21:41:48.347520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Aug 5 21:41:48.347528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Aug 5 21:41:48.347534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Aug 5 21:41:48.347541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Aug 5 21:41:48.347547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Aug 5 21:41:48.347553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Aug 5 21:41:48.347560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Aug 5 21:41:48.347566 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Aug 5 21:41:48.347573 kernel: Zone ranges: Aug 5 21:41:48.347579 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Aug 5 21:41:48.347585 kernel: 
DMA32 empty Aug 5 21:41:48.347592 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Aug 5 21:41:48.347600 kernel: Movable zone start for each node Aug 5 21:41:48.347609 kernel: Early memory node ranges Aug 5 21:41:48.347616 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Aug 5 21:41:48.347623 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Aug 5 21:41:48.347630 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Aug 5 21:41:48.347638 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Aug 5 21:41:48.347645 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Aug 5 21:41:48.347651 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Aug 5 21:41:48.347658 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Aug 5 21:41:48.347665 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Aug 5 21:41:48.347671 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Aug 5 21:41:48.347678 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Aug 5 21:41:48.347685 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Aug 5 21:41:48.347692 kernel: psci: probing for conduit method from ACPI. Aug 5 21:41:48.347698 kernel: psci: PSCIv1.1 detected in firmware. Aug 5 21:41:48.347705 kernel: psci: Using standard PSCI v0.2 function IDs Aug 5 21:41:48.347712 kernel: psci: MIGRATE_INFO_TYPE not supported. Aug 5 21:41:48.347720 kernel: psci: SMC Calling Convention v1.4 Aug 5 21:41:48.347727 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Aug 5 21:41:48.347733 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Aug 5 21:41:48.347741 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Aug 5 21:41:48.349436 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Aug 5 21:41:48.349461 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 5 21:41:48.349468 kernel: Detected PIPT I-cache on CPU0 Aug 5 21:41:48.349476 kernel: CPU features: detected: GIC system register CPU interface Aug 5 21:41:48.349483 kernel: CPU features: detected: Hardware dirty bit management Aug 5 21:41:48.349490 kernel: CPU features: detected: Spectre-BHB Aug 5 21:41:48.349497 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 5 21:41:48.349504 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 5 21:41:48.349516 kernel: CPU features: detected: ARM erratum 1418040 Aug 5 21:41:48.349523 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Aug 5 21:41:48.349530 kernel: alternatives: applying boot alternatives Aug 5 21:41:48.349538 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 21:41:48.349546 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 21:41:48.349554 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 21:41:48.349561 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 21:41:48.349568 kernel: Fallback order for Node 0: 0 Aug 5 21:41:48.349575 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Aug 5 21:41:48.349582 kernel: Policy zone: Normal Aug 5 21:41:48.349591 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 21:41:48.349598 kernel: software IO TLB: area num 2. Aug 5 21:41:48.349605 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Aug 5 21:41:48.349612 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Aug 5 21:41:48.349619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 21:41:48.349626 kernel: trace event string verifier disabled Aug 5 21:41:48.349633 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 21:41:48.349641 kernel: rcu: RCU event tracing is enabled. Aug 5 21:41:48.349648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 21:41:48.349655 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 21:41:48.349663 kernel: Tracing variant of Tasks RCU enabled. Aug 5 21:41:48.349670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 5 21:41:48.349679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 21:41:48.349686 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 5 21:41:48.349693 kernel: GICv3: 960 SPIs implemented Aug 5 21:41:48.349699 kernel: GICv3: 0 Extended SPIs implemented Aug 5 21:41:48.349706 kernel: Root IRQ handler: gic_handle_irq Aug 5 21:41:48.349713 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 5 21:41:48.349720 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Aug 5 21:41:48.349727 kernel: ITS: No ITS available, not enabling LPIs Aug 5 21:41:48.349734 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 21:41:48.349741 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 21:41:48.349762 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 5 21:41:48.349773 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 5 21:41:48.349780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 5 21:41:48.349788 kernel: Console: colour dummy device 80x25 Aug 5 21:41:48.349795 kernel: printk: console [tty1] enabled Aug 5 21:41:48.349802 kernel: ACPI: Core revision 20230628 Aug 5 21:41:48.349809 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 5 21:41:48.349816 kernel: pid_max: default: 32768 minimum: 301 Aug 5 21:41:48.349823 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 21:41:48.349831 kernel: SELinux: Initializing. Aug 5 21:41:48.349838 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.349846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.349853 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 21:41:48.349860 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 21:41:48.349867 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Aug 5 21:41:48.349874 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Aug 5 21:41:48.349882 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 5 21:41:48.349889 kernel: rcu: Hierarchical SRCU implementation. Aug 5 21:41:48.349903 kernel: rcu: Max phase no-delay instances is 400. 
Aug 5 21:41:48.349910 kernel: Remapping and enabling EFI services. Aug 5 21:41:48.349918 kernel: smp: Bringing up secondary CPUs ... Aug 5 21:41:48.349925 kernel: Detected PIPT I-cache on CPU1 Aug 5 21:41:48.349934 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Aug 5 21:41:48.349942 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 21:41:48.349949 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 5 21:41:48.349957 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 21:41:48.349964 kernel: SMP: Total of 2 processors activated. Aug 5 21:41:48.349973 kernel: CPU features: detected: 32-bit EL0 Support Aug 5 21:41:48.349981 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Aug 5 21:41:48.349988 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 5 21:41:48.349996 kernel: CPU features: detected: CRC32 instructions Aug 5 21:41:48.350003 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 5 21:41:48.350010 kernel: CPU features: detected: LSE atomic instructions Aug 5 21:41:48.350018 kernel: CPU features: detected: Privileged Access Never Aug 5 21:41:48.350025 kernel: CPU: All CPU(s) started at EL1 Aug 5 21:41:48.350032 kernel: alternatives: applying system-wide alternatives Aug 5 21:41:48.350041 kernel: devtmpfs: initialized Aug 5 21:41:48.350049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 21:41:48.350057 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 21:41:48.350065 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 21:41:48.350072 kernel: SMBIOS 3.1.0 present. Aug 5 21:41:48.350080 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Aug 5 21:41:48.350088 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 21:41:48.350095 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 5 21:41:48.350103 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 5 21:41:48.350112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 5 21:41:48.350119 kernel: audit: initializing netlink subsys (disabled) Aug 5 21:41:48.350127 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Aug 5 21:41:48.350134 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 21:41:48.350142 kernel: cpuidle: using governor menu Aug 5 21:41:48.350149 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Aug 5 21:41:48.350157 kernel: ASID allocator initialised with 32768 entries Aug 5 21:41:48.350165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 21:41:48.350172 kernel: Serial: AMBA PL011 UART driver Aug 5 21:41:48.350182 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 5 21:41:48.350189 kernel: Modules: 0 pages in range for non-PLT usage Aug 5 21:41:48.350196 kernel: Modules: 509120 pages in range for PLT usage Aug 5 21:41:48.350204 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 21:41:48.350211 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 21:41:48.350219 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 5 21:41:48.350226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 5 21:41:48.350234 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 21:41:48.350241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 21:41:48.350250 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 5 21:41:48.350257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 5 21:41:48.350265 kernel: ACPI: Added _OSI(Module Device) Aug 5 21:41:48.350273 kernel: ACPI: Added _OSI(Processor Device) Aug 5 21:41:48.350280 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 21:41:48.350288 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 21:41:48.350295 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 21:41:48.350303 kernel: ACPI: Interpreter enabled Aug 5 21:41:48.350310 kernel: ACPI: Using GIC for interrupt routing Aug 5 21:41:48.350319 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Aug 5 21:41:48.350327 kernel: printk: console [ttyAMA0] enabled Aug 5 21:41:48.350334 kernel: printk: bootconsole [pl11] disabled Aug 5 21:41:48.350342 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Aug 5 21:41:48.350349 kernel: iommu: Default domain type: Translated Aug 5 21:41:48.350357 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 5 21:41:48.350365 kernel: efivars: Registered efivars operations Aug 5 21:41:48.350372 kernel: vgaarb: loaded Aug 5 21:41:48.350380 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 5 21:41:48.350388 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 21:41:48.350396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 21:41:48.350403 kernel: pnp: PnP ACPI init Aug 5 21:41:48.350410 kernel: pnp: PnP ACPI: found 0 devices Aug 5 21:41:48.350417 kernel: NET: Registered PF_INET protocol family Aug 5 21:41:48.350425 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 21:41:48.350433 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 5 21:41:48.350440 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 21:41:48.350447 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 21:41:48.350457 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 5 21:41:48.350464 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 5 21:41:48.350472 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.350479 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.350487 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family Aug 5 21:41:48.350494 kernel: PCI: CLS 0 bytes, default 64 Aug 5 21:41:48.350502 kernel: kvm [1]: HYP mode not available Aug 5 21:41:48.350509 kernel: Initialise system trusted keyrings Aug 5 21:41:48.350517 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 5 21:41:48.350526 kernel: Key type asymmetric registered Aug 5 21:41:48.350533 kernel: Asymmetric key parser 'x509' registered Aug 5 21:41:48.350541 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 5 21:41:48.350548 kernel: io scheduler mq-deadline registered Aug 5 21:41:48.350555 kernel: io scheduler kyber registered Aug 5 21:41:48.350563 kernel: io scheduler bfq registered Aug 5 21:41:48.350570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 21:41:48.350577 kernel: thunder_xcv, ver 1.0 Aug 5 21:41:48.350585 kernel: thunder_bgx, ver 1.0 Aug 5 21:41:48.350592 kernel: nicpf, ver 1.0 Aug 5 21:41:48.350601 kernel: nicvf, ver 1.0 Aug 5 21:41:48.350765 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 5 21:41:48.350842 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:41:47 UTC (1722894107) Aug 5 21:41:48.350853 kernel: efifb: probing for efifb Aug 5 21:41:48.350861 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 5 21:41:48.350868 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 5 21:41:48.350876 kernel: efifb: scrolling: redraw Aug 5 21:41:48.350886 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 5 21:41:48.350894 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 21:41:48.350901 kernel: fb0: EFI VGA frame buffer device Aug 5 21:41:48.350908 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Aug 5 21:41:48.350916 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 21:41:48.350923 kernel: No ACPI PMU IRQ for CPU0 Aug 5 21:41:48.350931 kernel: No ACPI PMU IRQ for CPU1 Aug 5 21:41:48.350938 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Aug 5 21:41:48.350946 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 5 21:41:48.350955 kernel: watchdog: Hard watchdog permanently disabled Aug 5 21:41:48.350962 kernel: NET: Registered PF_INET6 protocol family Aug 5 21:41:48.350969 kernel: Segment Routing with IPv6 Aug 5 21:41:48.350977 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 21:41:48.350985 kernel: NET: Registered PF_PACKET protocol family Aug 5 21:41:48.350992 kernel: Key type dns_resolver registered Aug 5 21:41:48.350999 kernel: registered taskstats version 1 Aug 5 21:41:48.351007 kernel: Loading compiled-in X.509 certificates Aug 5 21:41:48.351014 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09' Aug 5 21:41:48.351023 kernel: Key type .fscrypt registered Aug 5 21:41:48.351030 kernel: Key type fscrypt-provisioning registered Aug 5 21:41:48.351038 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 5 21:41:48.351045 kernel: ima: Allocated hash algorithm: sha1 Aug 5 21:41:48.351052 kernel: ima: No architecture policies found Aug 5 21:41:48.351060 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 5 21:41:48.351067 kernel: clk: Disabling unused clocks Aug 5 21:41:48.351075 kernel: Freeing unused kernel memory: 39040K Aug 5 21:41:48.351082 kernel: Run /init as init process Aug 5 21:41:48.351091 kernel: with arguments: Aug 5 21:41:48.351098 kernel: /init Aug 5 21:41:48.351105 kernel: with environment: Aug 5 21:41:48.351113 kernel: HOME=/ Aug 5 21:41:48.351120 kernel: TERM=linux Aug 5 21:41:48.351128 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 21:41:48.351137 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:41:48.351147 systemd[1]: Detected virtualization microsoft. Aug 5 21:41:48.351157 systemd[1]: Detected architecture arm64. Aug 5 21:41:48.351166 systemd[1]: Running in initrd. Aug 5 21:41:48.351174 systemd[1]: No hostname configured, using default hostname. Aug 5 21:41:48.351183 systemd[1]: Hostname set to . Aug 5 21:41:48.351191 systemd[1]: Initializing machine ID from random generator. Aug 5 21:41:48.351199 systemd[1]: Queued start job for default target initrd.target. Aug 5 21:41:48.351207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:41:48.351215 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:41:48.351225 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 21:41:48.351233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:41:48.351241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 21:41:48.351250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 21:41:48.351259 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 21:41:48.351267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 21:41:48.351275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:41:48.351285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:41:48.351293 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:41:48.351300 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:41:48.351309 systemd[1]: Reached target swap.target - Swaps. Aug 5 21:41:48.351317 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:41:48.351325 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:41:48.351333 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:41:48.351341 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 21:41:48.351350 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 21:41:48.351359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Aug 5 21:41:48.351367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:41:48.351375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:41:48.351382 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:41:48.351390 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 21:41:48.351398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:41:48.351407 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 21:41:48.351415 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 21:41:48.351424 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:41:48.351433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 21:41:48.351461 systemd-journald[217]: Collecting audit messages is disabled. Aug 5 21:41:48.351481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:48.351492 systemd-journald[217]: Journal started Aug 5 21:41:48.351512 systemd-journald[217]: Runtime Journal (/run/log/journal/d80f92d974c54c37b59d146e2385a0ca) is 8.0M, max 78.6M, 70.6M free. Aug 5 21:41:48.369249 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:41:48.377228 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 21:41:48.383030 systemd-modules-load[218]: Inserted module 'overlay' Aug 5 21:41:48.424108 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 21:41:48.387118 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:41:48.440696 kernel: Bridge firewalling registered Aug 5 21:41:48.418924 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 21:41:48.433064 systemd-modules-load[218]: Inserted module 'br_netfilter' Aug 5 21:41:48.433947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:48.447134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:41:48.476060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:48.493375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:41:48.504939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 21:41:48.534920 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:41:48.542793 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:48.558172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:41:48.567289 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 21:41:48.589310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:41:48.617258 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 21:41:48.630360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:41:48.642897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 5 21:41:48.664852 dracut-cmdline[250]: dracut-dracut-053 Aug 5 21:41:48.680652 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 21:41:48.672562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:41:48.675836 systemd-resolved[254]: Positive Trust Anchors: Aug 5 21:41:48.675847 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:41:48.675878 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:41:48.679271 systemd-resolved[254]: Defaulting to hostname 'linux'. Aug 5 21:41:48.696052 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:41:48.729560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:41:48.806767 kernel: SCSI subsystem initialized Aug 5 21:41:48.813766 kernel: Loading iSCSI transport class v2.0-870. Aug 5 21:41:48.826772 kernel: iscsi: registered transport (tcp) Aug 5 21:41:48.844366 kernel: iscsi: registered transport (qla4xxx) Aug 5 21:41:48.844430 kernel: QLogic iSCSI HBA Driver Aug 5 21:41:48.878498 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 21:41:48.892208 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 21:41:48.929176 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 21:41:48.929218 kernel: device-mapper: uevent: version 1.0.3 Aug 5 21:41:48.935440 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 21:41:48.985777 kernel: raid6: neonx8 gen() 15703 MB/s Aug 5 21:41:49.005762 kernel: raid6: neonx4 gen() 15648 MB/s Aug 5 21:41:49.025759 kernel: raid6: neonx2 gen() 13246 MB/s Aug 5 21:41:49.046761 kernel: raid6: neonx1 gen() 10453 MB/s Aug 5 21:41:49.066758 kernel: raid6: int64x8 gen() 6963 MB/s Aug 5 21:41:49.086758 kernel: raid6: int64x4 gen() 7349 MB/s Aug 5 21:41:49.107760 kernel: raid6: int64x2 gen() 6128 MB/s Aug 5 21:41:49.138691 kernel: raid6: int64x1 gen() 5058 MB/s Aug 5 21:41:49.138716 kernel: raid6: using algorithm neonx8 gen() 15703 MB/s Aug 5 21:41:49.165005 kernel: raid6: .... 
xor() 11919 MB/s, rmw enabled Aug 5 21:41:49.165019 kernel: raid6: using neon recovery algorithm Aug 5 21:41:49.173763 kernel: xor: measuring software checksum speed Aug 5 21:41:49.177760 kernel: 8regs : 19883 MB/sec Aug 5 21:41:49.181759 kernel: 32regs : 19692 MB/sec Aug 5 21:41:49.190168 kernel: arm64_neon : 27234 MB/sec Aug 5 21:41:49.190179 kernel: xor: using function: arm64_neon (27234 MB/sec) Aug 5 21:41:49.242774 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 21:41:49.252892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:41:49.273927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:41:49.299094 systemd-udevd[437]: Using default interface naming scheme 'v255'. Aug 5 21:41:49.305886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:41:49.325040 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 21:41:49.338816 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Aug 5 21:41:49.365685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:41:49.386380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:41:49.432577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:41:49.456965 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 21:41:49.482493 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 21:41:49.500698 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:41:49.509099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:41:49.530951 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:41:49.554485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 21:41:49.575421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:41:49.599947 kernel: hv_vmbus: Vmbus version:5.3 Aug 5 21:41:49.575542 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:49.656337 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 5 21:41:49.656363 kernel: hv_vmbus: registering driver hv_storvsc Aug 5 21:41:49.656373 kernel: scsi host1: storvsc_host_t Aug 5 21:41:49.656577 kernel: scsi host0: storvsc_host_t Aug 5 21:41:49.656672 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 5 21:41:49.593620 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:49.706466 kernel: hv_vmbus: registering driver hid_hyperv Aug 5 21:41:49.706494 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 5 21:41:49.706565 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 5 21:41:49.706638 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 5 21:41:49.706702 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 5 21:41:49.706743 kernel: hv_vmbus: registering driver hv_netvsc Aug 5 21:41:49.706765 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 5 21:41:49.609024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 5 21:41:49.609196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.734833 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 5 21:41:49.629465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.686827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.727338 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:41:49.743343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.778167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:49.804616 kernel: PTP clock support registered Aug 5 21:41:49.804639 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: VF slot 1 added Aug 5 21:41:49.804967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:41:49.805084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:49.847337 kernel: hv_utils: Registering HyperV Utility Driver Aug 5 21:41:49.847359 kernel: hv_vmbus: registering driver hv_utils Aug 5 21:41:49.847369 kernel: hv_utils: Shutdown IC version 3.2 Aug 5 21:41:49.847378 kernel: hv_utils: Heartbeat IC version 3.0 Aug 5 21:41:49.847387 kernel: hv_utils: TimeSync IC version 4.0 Aug 5 21:41:49.835588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:41:49.835652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.715050 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 5 21:41:49.733390 systemd-journald[217]: Time jumped backwards, rotating. Aug 5 21:41:49.733451 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 5 21:41:49.733460 kernel: hv_vmbus: registering driver hv_pci Aug 5 21:41:49.733468 kernel: hv_pci c8fe0eab-1db1-4132-9498-fb0ab7ee7f61: PCI VMBus probing: Using version 0x10004 Aug 5 21:41:49.835091 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 5 21:41:49.835274 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 5 21:41:49.835403 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 5 21:41:49.835500 kernel: hv_pci c8fe0eab-1db1-4132-9498-fb0ab7ee7f61: PCI host bridge to bus 1db1:00 Aug 5 21:41:49.835586 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 5 21:41:49.835670 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 5 21:41:49.835757 kernel: pci_bus 1db1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 5 21:41:49.835849 kernel: pci_bus 1db1:00: No busn resource found for root bus, will use [bus 00-ff] Aug 5 21:41:49.835926 kernel: pci 1db1:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 5 21:41:49.836033 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 5 21:41:49.836120 kernel: pci 1db1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 5 21:41:49.836249 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:49.836259 kernel: pci 1db1:00:02.0: enabling Extended Tags Aug 5 21:41:49.836352 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 5 21:41:49.836468 kernel: pci 1db1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1db1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 5 21:41:49.836559 kernel: pci_bus 1db1:00: busn_res: [bus 00-ff] end is updated to 00 Aug 5 21:41:49.836640 kernel: pci 1db1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit 
pref] Aug 5 21:41:49.670447 systemd-resolved[254]: Clock change detected. Flushing caches. Aug 5 21:41:49.691293 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.740180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.793131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.815964 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:49.889999 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:49.909773 kernel: mlx5_core 1db1:00:02.0: enabling device (0000 -> 0002) Aug 5 21:41:50.121896 kernel: mlx5_core 1db1:00:02.0: firmware version: 16.30.1284 Aug 5 21:41:50.122043 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: VF registering: eth1 Aug 5 21:41:50.122139 kernel: mlx5_core 1db1:00:02.0 eth1: joined to eth0 Aug 5 21:41:50.122261 kernel: mlx5_core 1db1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Aug 5 21:41:50.130184 kernel: mlx5_core 1db1:00:02.0 enP7601s1: renamed from eth1 Aug 5 21:41:50.354446 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 5 21:41:50.493398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 5 21:41:50.510383 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (494) Aug 5 21:41:50.521298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 21:41:50.558191 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (485) Aug 5 21:41:50.572457 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 5 21:41:50.579442 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 5 21:41:50.611407 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 21:41:50.634200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:50.645187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:51.654592 disk-uuid[609]: The operation has completed successfully. Aug 5 21:41:51.660455 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:51.712999 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 21:41:51.713095 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 21:41:51.741303 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 21:41:51.754536 sh[695]: Success Aug 5 21:41:51.784193 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 5 21:41:51.966596 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 21:41:51.972926 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 21:41:51.996412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 5 21:41:52.025288 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99 Aug 5 21:41:52.025344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:52.032253 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 21:41:52.037350 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 21:41:52.041875 kernel: BTRFS info (device dm-0): using free space tree Aug 5 21:41:52.379787 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 21:41:52.385698 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 21:41:52.401483 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 21:41:52.409352 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 21:41:52.444140 kernel: BTRFS info (device sda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:52.444215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:52.448674 kernel: BTRFS info (device sda6): using free space tree Aug 5 21:41:52.472236 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 21:41:52.480547 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 21:41:52.495241 kernel: BTRFS info (device sda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:52.503830 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 21:41:52.519458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 21:41:52.569834 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:41:52.590337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:41:52.618387 systemd-networkd[879]: lo: Link UP Aug 5 21:41:52.618403 systemd-networkd[879]: lo: Gained carrier Aug 5 21:41:52.620006 systemd-networkd[879]: Enumeration completed Aug 5 21:41:52.622143 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:41:52.630510 systemd[1]: Reached target network.target - Network. Aug 5 21:41:52.634809 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:41:52.634813 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:41:52.726183 kernel: mlx5_core 1db1:00:02.0 enP7601s1: Link up Aug 5 21:41:52.776176 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: Data path switched to VF: enP7601s1 Aug 5 21:41:52.776129 systemd-networkd[879]: enP7601s1: Link UP Aug 5 21:41:52.776938 systemd-networkd[879]: eth0: Link UP Aug 5 21:41:52.777396 systemd-networkd[879]: eth0: Gained carrier Aug 5 21:41:52.777406 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 5 21:41:52.804790 systemd-networkd[879]: enP7601s1: Gained carrier Aug 5 21:41:52.818198 systemd-networkd[879]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 5 21:41:53.416678 ignition[822]: Ignition 2.19.0 Aug 5 21:41:53.416689 ignition[822]: Stage: fetch-offline Aug 5 21:41:53.420807 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:41:53.416728 ignition[822]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.416736 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.416820 ignition[822]: parsed url from cmdline: "" Aug 5 21:41:53.416823 ignition[822]: no config URL provided Aug 5 21:41:53.416827 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 21:41:53.448466 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 21:41:53.416833 ignition[822]: no config at "/usr/lib/ignition/user.ign" Aug 5 21:41:53.416838 ignition[822]: failed to fetch config: resource requires networking Aug 5 21:41:53.417018 ignition[822]: Ignition finished successfully Aug 5 21:41:53.468121 ignition[890]: Ignition 2.19.0 Aug 5 21:41:53.468128 ignition[890]: Stage: fetch Aug 5 21:41:53.468340 ignition[890]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.468351 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.468445 ignition[890]: parsed url from cmdline: "" Aug 5 21:41:53.468448 ignition[890]: no config URL provided Aug 5 21:41:53.468453 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 21:41:53.468566 ignition[890]: no config at "/usr/lib/ignition/user.ign" Aug 5 21:41:53.468589 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 5 21:41:53.561325 ignition[890]: GET result: OK Aug 5 21:41:53.561397 ignition[890]: config has been read from IMDS userdata Aug 5 21:41:53.561439 ignition[890]: parsing config with SHA512: d25c162d8804531d1d744e6de182cd79b324b0fe3b2103f0c489fa8b70e4c48d7bdd15f78c130a7ca01c89df9309aac216b629cbb959bd75a513f03b10f98ddb Aug 5 21:41:53.565287 unknown[890]: fetched base config from "system" Aug 5 21:41:53.565730 ignition[890]: fetch: fetch complete Aug 5 21:41:53.565303 unknown[890]: fetched base config from "system" Aug 5 21:41:53.565734 ignition[890]: fetch: fetch passed Aug 5 21:41:53.565309 unknown[890]: fetched user config from "azure" Aug 5 21:41:53.565788 ignition[890]: Ignition finished successfully Aug 5 21:41:53.569202 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 21:41:53.587289 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 21:41:53.614034 ignition[898]: Ignition 2.19.0 Aug 5 21:41:53.614042 ignition[898]: Stage: kargs Aug 5 21:41:53.624060 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 21:41:53.614333 ignition[898]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.614344 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.619756 ignition[898]: kargs: kargs passed Aug 5 21:41:53.619830 ignition[898]: Ignition finished successfully Aug 5 21:41:53.649384 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 21:41:53.669566 ignition[905]: Ignition 2.19.0 Aug 5 21:41:53.669574 ignition[905]: Stage: disks Aug 5 21:41:53.675471 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Aug 5 21:41:53.669780 ignition[905]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.684264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 21:41:53.669790 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.695750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 21:41:53.670888 ignition[905]: disks: disks passed Aug 5 21:41:53.707376 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:41:53.670929 ignition[905]: Ignition finished successfully Aug 5 21:41:53.719031 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:41:53.731095 systemd[1]: Reached target basic.target - Basic System. Aug 5 21:41:53.761365 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 21:41:53.846663 systemd-fsck[914]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 5 21:41:53.855246 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 21:41:53.872313 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 21:41:53.927207 kernel: EXT4-fs (sda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none. Aug 5 21:41:53.927517 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 21:41:53.932696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 21:41:53.977231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:41:53.989402 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 21:41:53.998905 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 5 21:41:54.012425 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 21:41:54.024556 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:41:54.051787 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (925) Aug 5 21:41:54.051812 kernel: BTRFS info (device sda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:54.045011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 21:41:54.071741 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:54.071762 kernel: BTRFS info (device sda6): using free space tree Aug 5 21:41:54.078188 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 21:41:54.079525 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 21:41:54.086923 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 21:41:54.156347 systemd-networkd[879]: eth0: Gained IPv6LL Aug 5 21:41:54.614391 coreos-metadata[927]: Aug 05 21:41:54.614 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 21:41:54.623487 coreos-metadata[927]: Aug 05 21:41:54.621 INFO Fetch successful Aug 5 21:41:54.623487 coreos-metadata[927]: Aug 05 21:41:54.621 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 5 21:41:54.641634 coreos-metadata[927]: Aug 05 21:41:54.641 INFO Fetch successful Aug 5 21:41:54.655209 coreos-metadata[927]: Aug 05 21:41:54.655 INFO wrote hostname ci-4012.1.0-a-183bdb833d to /sysroot/etc/hostname Aug 5 21:41:54.664975 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 21:41:54.671960 systemd-networkd[879]: enP7601s1: Gained IPv6LL Aug 5 21:41:54.876107 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 21:41:54.907651 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Aug 5 21:41:54.931524 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 21:41:54.959276 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 21:41:56.110717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 21:41:56.128464 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 21:41:56.136769 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 21:41:56.160761 kernel: BTRFS info (device sda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:56.161679 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 21:41:56.186860 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 21:41:56.199258 ignition[1045]: INFO : Ignition 2.19.0 Aug 5 21:41:56.199258 ignition[1045]: INFO : Stage: mount Aug 5 21:41:56.207017 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:56.207017 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:56.207017 ignition[1045]: INFO : mount: mount passed Aug 5 21:41:56.207017 ignition[1045]: INFO : Ignition finished successfully Aug 5 21:41:56.204720 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 21:41:56.235394 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 21:41:56.255370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:41:56.284265 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1057) Aug 5 21:41:56.298127 kernel: BTRFS info (device sda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:56.298190 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:56.302639 kernel: BTRFS info (device sda6): using free space tree Aug 5 21:41:56.310191 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 21:41:56.310642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 21:41:56.343434 ignition[1074]: INFO : Ignition 2.19.0 Aug 5 21:41:56.343434 ignition[1074]: INFO : Stage: files Aug 5 21:41:56.351599 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:56.351599 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:56.351599 ignition[1074]: DEBUG : files: compiled without relabeling support, skipping Aug 5 21:41:56.371422 ignition[1074]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 21:41:56.371422 ignition[1074]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 21:41:56.506451 ignition[1074]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 21:41:56.513922 ignition[1074]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 21:41:56.513922 ignition[1074]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 21:41:56.507617 unknown[1074]: wrote ssh authorized keys file for user: core Aug 5 21:41:56.535765 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:41:56.535765 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 21:41:56.648776 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 21:41:56.847583 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:41:56.847583 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Aug 5 21:41:57.286329 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 21:41:57.541761 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:57.541761 ignition[1074]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:41:57.620114 ignition[1074]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:41:57.620114 ignition[1074]: INFO : files: files passed Aug 5 21:41:57.620114 ignition[1074]: INFO : Ignition finished successfully Aug 5 21:41:57.587230 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 21:41:57.620468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 21:41:57.637342 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 21:41:57.656620 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 21:41:57.656725 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 21:41:57.697821 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:41:57.697821 initrd-setup-root-after-ignition[1103]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:41:57.716962 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:41:57.717642 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:41:57.732205 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 21:41:57.760135 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 21:41:57.790914 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 21:41:57.792266 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 21:41:57.806521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Aug 5 21:41:57.819501 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 21:41:57.830995 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 21:41:57.847414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 21:41:57.870972 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:41:57.887324 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 21:41:57.902863 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 21:41:57.902985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 21:41:57.916105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:41:57.929257 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:41:57.942492 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 21:41:57.954386 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 21:41:57.954459 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:41:57.972227 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 21:41:57.985111 systemd[1]: Stopped target basic.target - Basic System. Aug 5 21:41:57.995927 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 21:41:58.007196 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:41:58.019649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 21:41:58.033186 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 21:41:58.044788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:41:58.057782 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 21:41:58.070909 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 21:41:58.082274 systemd[1]: Stopped target swap.target - Swaps. Aug 5 21:41:58.092620 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 21:41:58.092694 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:41:58.108505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:41:58.120306 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:41:58.132420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 21:41:58.138323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:41:58.145141 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 21:41:58.145218 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 21:41:58.163020 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 21:41:58.163081 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:41:58.175080 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 21:41:58.175130 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 21:41:58.185598 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 5 21:41:58.185645 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 21:41:58.215396 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Aug 5 21:41:58.227229 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 21:41:58.227303 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:41:58.261369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 21:41:58.279322 ignition[1128]: INFO : Ignition 2.19.0 Aug 5 21:41:58.279322 ignition[1128]: INFO : Stage: umount Aug 5 21:41:58.279322 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:58.279322 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:58.279322 ignition[1128]: INFO : umount: umount passed Aug 5 21:41:58.279322 ignition[1128]: INFO : Ignition finished successfully Aug 5 21:41:58.271866 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 21:41:58.271946 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:41:58.283410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 21:41:58.283469 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:41:58.299387 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 21:41:58.299475 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 21:41:58.311948 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 21:41:58.312060 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 21:41:58.322463 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 21:41:58.322527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 21:41:58.333287 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 21:41:58.333339 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 21:41:58.344410 systemd[1]: Stopped target network.target - Network. Aug 5 21:41:58.355641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 21:41:58.355710 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:41:58.367811 systemd[1]: Stopped target paths.target - Path Units. Aug 5 21:41:58.379376 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 21:41:58.384142 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:41:58.391803 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 21:41:58.397340 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 21:41:58.407238 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 21:41:58.407296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:41:58.417272 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 21:41:58.417315 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:41:58.429010 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 21:41:58.429063 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 21:41:58.439573 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 21:41:58.439623 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 21:41:58.450051 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 21:41:58.460330 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 21:41:58.470204 systemd-networkd[879]: eth0: DHCPv6 lease lost Aug 5 21:41:58.478133 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Aug 5 21:41:58.478729 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 21:41:58.478825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 21:41:58.491608 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 21:41:58.493204 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 21:41:58.507780 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 21:41:58.509187 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 21:41:58.519408 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 21:41:58.519467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:41:58.533827 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 21:41:58.533901 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 21:41:58.748929 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: Data path switched from VF: enP7601s1 Aug 5 21:41:58.570392 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 21:41:58.578566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 21:41:58.578646 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:41:58.590297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 21:41:58.590356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:41:58.601247 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 21:41:58.601294 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 21:41:58.612187 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 21:41:58.612236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:41:58.624411 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:41:58.657730 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 21:41:58.657904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:41:58.669279 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 21:41:58.669328 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 21:41:58.680466 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 21:41:58.680504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:41:58.692229 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 21:41:58.692279 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:41:58.707406 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 21:41:58.707461 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 21:41:58.731751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:41:58.731815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:58.777385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 21:41:58.791934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 21:41:58.792005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:41:58.806188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 5 21:41:58.806249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:58.818818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 21:41:58.818925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 21:41:58.852677 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 21:41:58.852810 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 21:41:58.984241 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Aug 5 21:41:58.862943 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 21:41:58.889366 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 21:41:58.905769 systemd[1]: Switching root. Aug 5 21:41:58.998832 systemd-journald[217]: Journal stopped Aug 5 21:41:48.347322 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Aug 5 21:41:48.347345 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT Mon Aug 5 20:24:20 -00 2024 Aug 5 21:41:48.347353 kernel: KASLR enabled Aug 5 21:41:48.347361 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Aug 5 21:41:48.347367 kernel: printk: bootconsole [pl11] enabled Aug 5 21:41:48.347373 kernel: efi: EFI v2.7 by EDK II Aug 5 21:41:48.347380 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x3e925e18 Aug 5 21:41:48.347386 kernel: random: crng init done Aug 5 21:41:48.347392 kernel: ACPI: Early table checksum verification disabled Aug 5 21:41:48.347398 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Aug 5 21:41:48.347405 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347411 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347419 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 5 21:41:48.347425 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347433 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347439 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347447 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347455 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347461 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347468 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Aug 5 21:41:48.347474 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 21:41:48.347481 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Aug 5 21:41:48.347487 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Aug 5 21:41:48.347494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Aug 5 21:41:48.347500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Aug 5 21:41:48.347506 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Aug 5 21:41:48.347513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Aug 5 21:41:48.347520 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Aug 5 21:41:48.347528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Aug 5 21:41:48.347534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Aug 5 21:41:48.347541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Aug 5 21:41:48.347547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Aug 5 21:41:48.347553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Aug 5 21:41:48.347560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Aug 5 21:41:48.347566 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff] Aug 5 21:41:48.347573 kernel: Zone ranges: Aug 5 21:41:48.347579 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Aug 5 21:41:48.347585 kernel: DMA32 empty Aug 5 21:41:48.347592 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Aug 5 21:41:48.347600 kernel: Movable zone start for each node Aug 5 21:41:48.347609 kernel: Early memory node ranges Aug 5 21:41:48.347616 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Aug 5 21:41:48.347623 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Aug 5 21:41:48.347630 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Aug 5 21:41:48.347638 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Aug 5 21:41:48.347645 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Aug 5 21:41:48.347651 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Aug 5 21:41:48.347658 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Aug 5 21:41:48.347665 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Aug 5 21:41:48.347671 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Aug 5 21:41:48.347678 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Aug 5 21:41:48.347685 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Aug 5 21:41:48.347692 kernel: psci: probing for conduit method from ACPI. Aug 5 21:41:48.347698 kernel: psci: PSCIv1.1 detected in firmware. Aug 5 21:41:48.347705 kernel: psci: Using standard PSCI v0.2 function IDs Aug 5 21:41:48.347712 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Aug 5 21:41:48.347720 kernel: psci: SMC Calling Convention v1.4 Aug 5 21:41:48.347727 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Aug 5 21:41:48.347733 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Aug 5 21:41:48.347741 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Aug 5 21:41:48.349436 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Aug 5 21:41:48.349461 kernel: pcpu-alloc: [0] 0 [0] 1 Aug 5 21:41:48.349468 kernel: Detected PIPT I-cache on CPU0 Aug 5 21:41:48.349476 kernel: CPU features: detected: GIC system register CPU interface Aug 5 21:41:48.349483 kernel: CPU features: detected: Hardware dirty bit management Aug 5 21:41:48.349490 kernel: CPU features: detected: Spectre-BHB Aug 5 21:41:48.349497 kernel: CPU features: kernel page table isolation forced ON by KASLR Aug 5 21:41:48.349504 kernel: CPU features: detected: Kernel page table isolation (KPTI) Aug 5 21:41:48.349516 kernel: CPU features: detected: ARM erratum 1418040 Aug 5 21:41:48.349523 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Aug 5 21:41:48.349530 kernel: alternatives: applying boot alternatives Aug 5 21:41:48.349538 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 21:41:48.349546 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 21:41:48.349554 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 21:41:48.349561 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 5 21:41:48.349568 kernel: Fallback order for Node 0: 0 Aug 5 21:41:48.349575 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Aug 5 21:41:48.349582 kernel: Policy zone: Normal Aug 5 21:41:48.349591 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 21:41:48.349598 kernel: software IO TLB: area num 2. Aug 5 21:41:48.349605 kernel: software IO TLB: mapped [mem 0x000000003a925000-0x000000003e925000] (64MB) Aug 5 21:41:48.349612 kernel: Memory: 3986332K/4194160K available (10240K kernel code, 2182K rwdata, 8072K rodata, 39040K init, 897K bss, 207828K reserved, 0K cma-reserved) Aug 5 21:41:48.349619 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 21:41:48.349626 kernel: trace event string verifier disabled Aug 5 21:41:48.349633 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 21:41:48.349641 kernel: rcu: RCU event tracing is enabled. Aug 5 21:41:48.349648 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 21:41:48.349655 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 21:41:48.349663 kernel: Tracing variant of Tasks RCU enabled. Aug 5 21:41:48.349670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 5 21:41:48.349679 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 21:41:48.349686 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Aug 5 21:41:48.349693 kernel: GICv3: 960 SPIs implemented Aug 5 21:41:48.349699 kernel: GICv3: 0 Extended SPIs implemented Aug 5 21:41:48.349706 kernel: Root IRQ handler: gic_handle_irq Aug 5 21:41:48.349713 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Aug 5 21:41:48.349720 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Aug 5 21:41:48.349727 kernel: ITS: No ITS available, not enabling LPIs Aug 5 21:41:48.349734 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 21:41:48.349741 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 21:41:48.349762 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Aug 5 21:41:48.349773 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Aug 5 21:41:48.349780 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Aug 5 21:41:48.349788 kernel: Console: colour dummy device 80x25 Aug 5 21:41:48.349795 kernel: printk: console [tty1] enabled Aug 5 21:41:48.349802 kernel: ACPI: Core revision 20230628 Aug 5 21:41:48.349809 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Aug 5 21:41:48.349816 kernel: pid_max: default: 32768 minimum: 301 Aug 5 21:41:48.349823 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 21:41:48.349831 kernel: SELinux: Initializing. Aug 5 21:41:48.349838 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.349846 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.349853 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 21:41:48.349860 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 21:41:48.349867 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Aug 5 21:41:48.349874 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Aug 5 21:41:48.349882 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 5 21:41:48.349889 kernel: rcu: Hierarchical SRCU implementation. Aug 5 21:41:48.349903 kernel: rcu: Max phase no-delay instances is 400. Aug 5 21:41:48.349910 kernel: Remapping and enabling EFI services. Aug 5 21:41:48.349918 kernel: smp: Bringing up secondary CPUs ... Aug 5 21:41:48.349925 kernel: Detected PIPT I-cache on CPU1 Aug 5 21:41:48.349934 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Aug 5 21:41:48.349942 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Aug 5 21:41:48.349949 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Aug 5 21:41:48.349957 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 21:41:48.349964 kernel: SMP: Total of 2 processors activated. 
Aug 5 21:41:48.349973 kernel: CPU features: detected: 32-bit EL0 Support Aug 5 21:41:48.349981 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Aug 5 21:41:48.349988 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Aug 5 21:41:48.349996 kernel: CPU features: detected: CRC32 instructions Aug 5 21:41:48.350003 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Aug 5 21:41:48.350010 kernel: CPU features: detected: LSE atomic instructions Aug 5 21:41:48.350018 kernel: CPU features: detected: Privileged Access Never Aug 5 21:41:48.350025 kernel: CPU: All CPU(s) started at EL1 Aug 5 21:41:48.350032 kernel: alternatives: applying system-wide alternatives Aug 5 21:41:48.350041 kernel: devtmpfs: initialized Aug 5 21:41:48.350049 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 21:41:48.350057 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 21:41:48.350065 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 21:41:48.350072 kernel: SMBIOS 3.1.0 present. Aug 5 21:41:48.350080 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 11/28/2023 Aug 5 21:41:48.350088 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 21:41:48.350095 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Aug 5 21:41:48.350103 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Aug 5 21:41:48.350112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Aug 5 21:41:48.350119 kernel: audit: initializing netlink subsys (disabled) Aug 5 21:41:48.350127 kernel: audit: type=2000 audit(0.046:1): state=initialized audit_enabled=0 res=1 Aug 5 21:41:48.350134 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 21:41:48.350142 kernel: cpuidle: using governor menu Aug 5 21:41:48.350149 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Aug 5 21:41:48.350157 kernel: ASID allocator initialised with 32768 entries Aug 5 21:41:48.350165 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 21:41:48.350172 kernel: Serial: AMBA PL011 UART driver Aug 5 21:41:48.350182 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Aug 5 21:41:48.350189 kernel: Modules: 0 pages in range for non-PLT usage Aug 5 21:41:48.350196 kernel: Modules: 509120 pages in range for PLT usage Aug 5 21:41:48.350204 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 21:41:48.350211 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 21:41:48.350219 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Aug 5 21:41:48.350226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Aug 5 21:41:48.350234 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 21:41:48.350241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 21:41:48.350250 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Aug 5 21:41:48.350257 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Aug 5 21:41:48.350265 kernel: ACPI: Added _OSI(Module Device) Aug 5 21:41:48.350273 kernel: ACPI: Added _OSI(Processor Device) Aug 5 21:41:48.350280 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 21:41:48.350288 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 21:41:48.350295 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 21:41:48.350303 kernel: ACPI: Interpreter enabled Aug 5 21:41:48.350310 kernel: ACPI: Using GIC for interrupt routing Aug 5 21:41:48.350319 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Aug 5 21:41:48.350327 kernel: printk: console [ttyAMA0] enabled Aug 5 21:41:48.350334 kernel: printk: bootconsole [pl11] disabled Aug 5 21:41:48.350342 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Aug 5 21:41:48.350349 kernel: iommu: Default domain type: Translated Aug 5 21:41:48.350357 kernel: iommu: DMA domain TLB invalidation policy: strict mode Aug 5 21:41:48.350365 kernel: efivars: Registered efivars operations Aug 5 21:41:48.350372 kernel: vgaarb: loaded Aug 5 21:41:48.350380 kernel: clocksource: Switched to clocksource arch_sys_counter Aug 5 21:41:48.350388 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 21:41:48.350396 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 21:41:48.350403 kernel: pnp: PnP ACPI init Aug 5 21:41:48.350410 kernel: pnp: PnP ACPI: found 0 devices Aug 5 21:41:48.350417 kernel: NET: Registered PF_INET protocol family Aug 5 21:41:48.350425 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 21:41:48.350433 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 5 21:41:48.350440 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 21:41:48.350447 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 5 21:41:48.350457 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 5 21:41:48.350464 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 5 21:41:48.350472 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.350479 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 5 21:41:48.350487 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol 
family Aug 5 21:41:48.350494 kernel: PCI: CLS 0 bytes, default 64 Aug 5 21:41:48.350502 kernel: kvm [1]: HYP mode not available Aug 5 21:41:48.350509 kernel: Initialise system trusted keyrings Aug 5 21:41:48.350517 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 5 21:41:48.350526 kernel: Key type asymmetric registered Aug 5 21:41:48.350533 kernel: Asymmetric key parser 'x509' registered Aug 5 21:41:48.350541 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 5 21:41:48.350548 kernel: io scheduler mq-deadline registered Aug 5 21:41:48.350555 kernel: io scheduler kyber registered Aug 5 21:41:48.350563 kernel: io scheduler bfq registered Aug 5 21:41:48.350570 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 21:41:48.350577 kernel: thunder_xcv, ver 1.0 Aug 5 21:41:48.350585 kernel: thunder_bgx, ver 1.0 Aug 5 21:41:48.350592 kernel: nicpf, ver 1.0 Aug 5 21:41:48.350601 kernel: nicvf, ver 1.0 Aug 5 21:41:48.350765 kernel: rtc-efi rtc-efi.0: registered as rtc0 Aug 5 21:41:48.350842 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-08-05T21:41:47 UTC (1722894107) Aug 5 21:41:48.350853 kernel: efifb: probing for efifb Aug 5 21:41:48.350861 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 5 21:41:48.350868 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 5 21:41:48.350876 kernel: efifb: scrolling: redraw Aug 5 21:41:48.350886 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 5 21:41:48.350894 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 21:41:48.350901 kernel: fb0: EFI VGA frame buffer device Aug 5 21:41:48.350908 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Aug 5 21:41:48.350916 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 21:41:48.350923 kernel: No ACPI PMU IRQ for CPU0 Aug 5 21:41:48.350931 kernel: No ACPI PMU IRQ for CPU1 Aug 5 21:41:48.350938 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Aug 5 21:41:48.350946 kernel: watchdog: Delayed init of the lockup detector failed: -19 Aug 5 21:41:48.350955 kernel: watchdog: Hard watchdog permanently disabled Aug 5 21:41:48.350962 kernel: NET: Registered PF_INET6 protocol family Aug 5 21:41:48.350969 kernel: Segment Routing with IPv6 Aug 5 21:41:48.350977 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 21:41:48.350985 kernel: NET: Registered PF_PACKET protocol family Aug 5 21:41:48.350992 kernel: Key type dns_resolver registered Aug 5 21:41:48.350999 kernel: registered taskstats version 1 Aug 5 21:41:48.351007 kernel: Loading compiled-in X.509 certificates Aug 5 21:41:48.351014 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: 7b6de7a842f23ac7c1bb6bedfb9546933daaea09' Aug 5 21:41:48.351023 kernel: Key type .fscrypt registered Aug 5 21:41:48.351030 kernel: Key type fscrypt-provisioning registered Aug 5 21:41:48.351038 kernel: ima: No TPM chip found, activating TPM-bypass! 
Aug 5 21:41:48.351045 kernel: ima: Allocated hash algorithm: sha1 Aug 5 21:41:48.351052 kernel: ima: No architecture policies found Aug 5 21:41:48.351060 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Aug 5 21:41:48.351067 kernel: clk: Disabling unused clocks Aug 5 21:41:48.351075 kernel: Freeing unused kernel memory: 39040K Aug 5 21:41:48.351082 kernel: Run /init as init process Aug 5 21:41:48.351091 kernel: with arguments: Aug 5 21:41:48.351098 kernel: /init Aug 5 21:41:48.351105 kernel: with environment: Aug 5 21:41:48.351113 kernel: HOME=/ Aug 5 21:41:48.351120 kernel: TERM=linux Aug 5 21:41:48.351128 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 21:41:48.351137 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:41:48.351147 systemd[1]: Detected virtualization microsoft. Aug 5 21:41:48.351157 systemd[1]: Detected architecture arm64. Aug 5 21:41:48.351166 systemd[1]: Running in initrd. Aug 5 21:41:48.351174 systemd[1]: No hostname configured, using default hostname. Aug 5 21:41:48.351183 systemd[1]: Hostname set to . Aug 5 21:41:48.351191 systemd[1]: Initializing machine ID from random generator. Aug 5 21:41:48.351199 systemd[1]: Queued start job for default target initrd.target. Aug 5 21:41:48.351207 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:41:48.351215 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:41:48.351225 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 21:41:48.351233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:41:48.351241 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 21:41:48.351250 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 21:41:48.351259 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 21:41:48.351267 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 21:41:48.351275 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:41:48.351285 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:41:48.351293 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:41:48.351300 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:41:48.351309 systemd[1]: Reached target swap.target - Swaps. Aug 5 21:41:48.351317 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:41:48.351325 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:41:48.351333 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:41:48.351341 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 21:41:48.351350 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 21:41:48.351359 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Aug 5 21:41:48.351367 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:41:48.351375 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:41:48.351382 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:41:48.351390 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 21:41:48.351398 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:41:48.351407 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 21:41:48.351415 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 21:41:48.351424 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:41:48.351433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 21:41:48.351461 systemd-journald[217]: Collecting audit messages is disabled. Aug 5 21:41:48.351481 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:48.351492 systemd-journald[217]: Journal started Aug 5 21:41:48.351512 systemd-journald[217]: Runtime Journal (/run/log/journal/d80f92d974c54c37b59d146e2385a0ca) is 8.0M, max 78.6M, 70.6M free. Aug 5 21:41:48.369249 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:41:48.377228 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 21:41:48.383030 systemd-modules-load[218]: Inserted module 'overlay' Aug 5 21:41:48.424108 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 21:41:48.387118 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:41:48.440696 kernel: Bridge firewalling registered Aug 5 21:41:48.418924 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 21:41:48.433064 systemd-modules-load[218]: Inserted module 'br_netfilter' Aug 5 21:41:48.433947 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:48.447134 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:41:48.476060 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:48.493375 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:41:48.504939 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 21:41:48.534920 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:41:48.542793 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:48.558172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:41:48.567289 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 21:41:48.589310 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:41:48.617258 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 21:41:48.630360 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:41:48.642897 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Aug 5 21:41:48.664852 dracut-cmdline[250]: dracut-dracut-053 Aug 5 21:41:48.680652 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=bb6c4f94d40caa6d83ad7b7b3f8907e11ce677871c150228b9a5377ddab3341e Aug 5 21:41:48.672562 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:41:48.675836 systemd-resolved[254]: Positive Trust Anchors: Aug 5 21:41:48.675847 systemd-resolved[254]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:41:48.675878 systemd-resolved[254]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:41:48.679271 systemd-resolved[254]: Defaulting to hostname 'linux'. Aug 5 21:41:48.696052 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:41:48.729560 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:41:48.806767 kernel: SCSI subsystem initialized Aug 5 21:41:48.813766 kernel: Loading iSCSI transport class v2.0-870. Aug 5 21:41:48.826772 kernel: iscsi: registered transport (tcp) Aug 5 21:41:48.844366 kernel: iscsi: registered transport (qla4xxx) Aug 5 21:41:48.844430 kernel: QLogic iSCSI HBA Driver Aug 5 21:41:48.878498 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 21:41:48.892208 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 21:41:48.929176 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 21:41:48.929218 kernel: device-mapper: uevent: version 1.0.3 Aug 5 21:41:48.935440 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 21:41:48.985777 kernel: raid6: neonx8 gen() 15703 MB/s Aug 5 21:41:49.005762 kernel: raid6: neonx4 gen() 15648 MB/s Aug 5 21:41:49.025759 kernel: raid6: neonx2 gen() 13246 MB/s Aug 5 21:41:49.046761 kernel: raid6: neonx1 gen() 10453 MB/s Aug 5 21:41:49.066758 kernel: raid6: int64x8 gen() 6963 MB/s Aug 5 21:41:49.086758 kernel: raid6: int64x4 gen() 7349 MB/s Aug 5 21:41:49.107760 kernel: raid6: int64x2 gen() 6128 MB/s Aug 5 21:41:49.138691 kernel: raid6: int64x1 gen() 5058 MB/s Aug 5 21:41:49.138716 kernel: raid6: using algorithm neonx8 gen() 15703 MB/s Aug 5 21:41:49.165005 kernel: raid6: .... 
xor() 11919 MB/s, rmw enabled Aug 5 21:41:49.165019 kernel: raid6: using neon recovery algorithm Aug 5 21:41:49.173763 kernel: xor: measuring software checksum speed Aug 5 21:41:49.177760 kernel: 8regs : 19883 MB/sec Aug 5 21:41:49.181759 kernel: 32regs : 19692 MB/sec Aug 5 21:41:49.190168 kernel: arm64_neon : 27234 MB/sec Aug 5 21:41:49.190179 kernel: xor: using function: arm64_neon (27234 MB/sec) Aug 5 21:41:49.242774 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 21:41:49.252892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:41:49.273927 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:41:49.299094 systemd-udevd[437]: Using default interface naming scheme 'v255'. Aug 5 21:41:49.305886 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:41:49.325040 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 21:41:49.338816 dracut-pre-trigger[451]: rd.md=0: removing MD RAID activation Aug 5 21:41:49.365685 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:41:49.386380 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:41:49.432577 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:41:49.456965 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 21:41:49.482493 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 21:41:49.500698 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:41:49.509099 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:41:49.530951 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:41:49.554485 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 21:41:49.575421 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:41:49.599947 kernel: hv_vmbus: Vmbus version:5.3 Aug 5 21:41:49.575542 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:49.656337 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 5 21:41:49.656363 kernel: hv_vmbus: registering driver hv_storvsc Aug 5 21:41:49.656373 kernel: scsi host1: storvsc_host_t Aug 5 21:41:49.656577 kernel: scsi host0: storvsc_host_t Aug 5 21:41:49.656672 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 5 21:41:49.593620 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:49.706466 kernel: hv_vmbus: registering driver hid_hyperv Aug 5 21:41:49.706494 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 5 21:41:49.706565 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 5 21:41:49.706638 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 5 21:41:49.706702 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 5 21:41:49.706743 kernel: hv_vmbus: registering driver hv_netvsc Aug 5 21:41:49.706765 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 5 21:41:49.609024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 5 21:41:49.609196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.734833 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 5 21:41:49.629465 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.686827 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.727338 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:41:49.743343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.778167 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:49.804616 kernel: PTP clock support registered Aug 5 21:41:49.804639 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: VF slot 1 added Aug 5 21:41:49.804967 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:41:49.805084 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:49.847337 kernel: hv_utils: Registering HyperV Utility Driver Aug 5 21:41:49.847359 kernel: hv_vmbus: registering driver hv_utils Aug 5 21:41:49.847369 kernel: hv_utils: Shutdown IC version 3.2 Aug 5 21:41:49.847378 kernel: hv_utils: Heartbeat IC version 3.0 Aug 5 21:41:49.847387 kernel: hv_utils: TimeSync IC version 4.0 Aug 5 21:41:49.835588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:41:49.835652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.715050 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 5 21:41:49.733390 systemd-journald[217]: Time jumped backwards, rotating. Aug 5 21:41:49.733451 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 5 21:41:49.733460 kernel: hv_vmbus: registering driver hv_pci Aug 5 21:41:49.733468 kernel: hv_pci c8fe0eab-1db1-4132-9498-fb0ab7ee7f61: PCI VMBus probing: Using version 0x10004 Aug 5 21:41:49.835091 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 5 21:41:49.835274 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 5 21:41:49.835403 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 5 21:41:49.835500 kernel: hv_pci c8fe0eab-1db1-4132-9498-fb0ab7ee7f61: PCI host bridge to bus 1db1:00 Aug 5 21:41:49.835586 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 5 21:41:49.835670 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 5 21:41:49.835757 kernel: pci_bus 1db1:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Aug 5 21:41:49.835849 kernel: pci_bus 1db1:00: No busn resource found for root bus, will use [bus 00-ff] Aug 5 21:41:49.835926 kernel: pci 1db1:00:02.0: [15b3:1018] type 00 class 0x020000 Aug 5 21:41:49.836033 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 5 21:41:49.836120 kernel: pci 1db1:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Aug 5 21:41:49.836249 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:49.836259 kernel: pci 1db1:00:02.0: enabling Extended Tags Aug 5 21:41:49.836352 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 5 21:41:49.836468 kernel: pci 1db1:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 1db1:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Aug 5 21:41:49.836559 kernel: pci_bus 1db1:00: busn_res: [bus 00-ff] end is updated to 00 Aug 5 21:41:49.836640 kernel: pci 1db1:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit 
pref] Aug 5 21:41:49.670447 systemd-resolved[254]: Clock change detected. Flushing caches. Aug 5 21:41:49.691293 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.740180 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:41:49.793131 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:49.815964 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 21:41:49.889999 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:49.909773 kernel: mlx5_core 1db1:00:02.0: enabling device (0000 -> 0002) Aug 5 21:41:50.121896 kernel: mlx5_core 1db1:00:02.0: firmware version: 16.30.1284 Aug 5 21:41:50.122043 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: VF registering: eth1 Aug 5 21:41:50.122139 kernel: mlx5_core 1db1:00:02.0 eth1: joined to eth0 Aug 5 21:41:50.122261 kernel: mlx5_core 1db1:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Aug 5 21:41:50.130184 kernel: mlx5_core 1db1:00:02.0 enP7601s1: renamed from eth1 Aug 5 21:41:50.354446 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 5 21:41:50.493398 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 5 21:41:50.510383 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (494) Aug 5 21:41:50.521298 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 21:41:50.558191 kernel: BTRFS: device fsid 8a9ab799-ab52-4671-9234-72d7c6e57b99 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (485) Aug 5 21:41:50.572457 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 5 21:41:50.579442 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 5 21:41:50.611407 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 21:41:50.634200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:50.645187 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:51.654592 disk-uuid[609]: The operation has completed successfully. Aug 5 21:41:51.660455 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 21:41:51.712999 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 21:41:51.713095 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 21:41:51.741303 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 21:41:51.754536 sh[695]: Success Aug 5 21:41:51.784193 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Aug 5 21:41:51.966596 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 21:41:51.972926 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 21:41:51.996412 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 5 21:41:52.025288 kernel: BTRFS info (device dm-0): first mount of filesystem 8a9ab799-ab52-4671-9234-72d7c6e57b99 Aug 5 21:41:52.025344 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:52.032253 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 21:41:52.037350 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 21:41:52.041875 kernel: BTRFS info (device dm-0): using free space tree Aug 5 21:41:52.379787 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 21:41:52.385698 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 21:41:52.401483 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 21:41:52.409352 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 21:41:52.444140 kernel: BTRFS info (device sda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:52.444215 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:52.448674 kernel: BTRFS info (device sda6): using free space tree Aug 5 21:41:52.472236 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 21:41:52.480547 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 21:41:52.495241 kernel: BTRFS info (device sda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:52.503830 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 21:41:52.519458 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 21:41:52.569834 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:41:52.590337 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:41:52.618387 systemd-networkd[879]: lo: Link UP Aug 5 21:41:52.618403 systemd-networkd[879]: lo: Gained carrier Aug 5 21:41:52.620006 systemd-networkd[879]: Enumeration completed Aug 5 21:41:52.622143 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:41:52.630510 systemd[1]: Reached target network.target - Network. Aug 5 21:41:52.634809 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:41:52.634813 systemd-networkd[879]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:41:52.726183 kernel: mlx5_core 1db1:00:02.0 enP7601s1: Link up Aug 5 21:41:52.776176 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: Data path switched to VF: enP7601s1 Aug 5 21:41:52.776129 systemd-networkd[879]: enP7601s1: Link UP Aug 5 21:41:52.776938 systemd-networkd[879]: eth0: Link UP Aug 5 21:41:52.777396 systemd-networkd[879]: eth0: Gained carrier Aug 5 21:41:52.777406 systemd-networkd[879]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 5 21:41:52.804790 systemd-networkd[879]: enP7601s1: Gained carrier Aug 5 21:41:52.818198 systemd-networkd[879]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 5 21:41:53.416678 ignition[822]: Ignition 2.19.0 Aug 5 21:41:53.416689 ignition[822]: Stage: fetch-offline Aug 5 21:41:53.420807 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:41:53.416728 ignition[822]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.416736 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.416820 ignition[822]: parsed url from cmdline: "" Aug 5 21:41:53.416823 ignition[822]: no config URL provided Aug 5 21:41:53.416827 ignition[822]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 21:41:53.448466 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 21:41:53.416833 ignition[822]: no config at "/usr/lib/ignition/user.ign" Aug 5 21:41:53.416838 ignition[822]: failed to fetch config: resource requires networking Aug 5 21:41:53.417018 ignition[822]: Ignition finished successfully Aug 5 21:41:53.468121 ignition[890]: Ignition 2.19.0 Aug 5 21:41:53.468128 ignition[890]: Stage: fetch Aug 5 21:41:53.468340 ignition[890]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.468351 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.468445 ignition[890]: parsed url from cmdline: "" Aug 5 21:41:53.468448 ignition[890]: no config URL provided Aug 5 21:41:53.468453 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 21:41:53.468566 ignition[890]: no config at "/usr/lib/ignition/user.ign" Aug 5 21:41:53.468589 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 5 21:41:53.561325 ignition[890]: GET result: OK Aug 5 21:41:53.561397 ignition[890]: config has been read from IMDS userdata Aug 5 21:41:53.561439 ignition[890]: parsing config with SHA512: d25c162d8804531d1d744e6de182cd79b324b0fe3b2103f0c489fa8b70e4c48d7bdd15f78c130a7ca01c89df9309aac216b629cbb959bd75a513f03b10f98ddb Aug 5 21:41:53.565287 unknown[890]: fetched base config from "system" Aug 5 21:41:53.565730 ignition[890]: fetch: fetch complete Aug 5 21:41:53.565303 unknown[890]: fetched base config from "system" Aug 5 21:41:53.565734 ignition[890]: fetch: fetch passed Aug 5 21:41:53.565309 unknown[890]: fetched user config from "azure" Aug 5 21:41:53.565788 ignition[890]: Ignition finished successfully Aug 5 21:41:53.569202 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 21:41:53.587289 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 21:41:53.614034 ignition[898]: Ignition 2.19.0 Aug 5 21:41:53.614042 ignition[898]: Stage: kargs Aug 5 21:41:53.624060 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 21:41:53.614333 ignition[898]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.614344 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.619756 ignition[898]: kargs: kargs passed Aug 5 21:41:53.619830 ignition[898]: Ignition finished successfully Aug 5 21:41:53.649384 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 21:41:53.669566 ignition[905]: Ignition 2.19.0 Aug 5 21:41:53.669574 ignition[905]: Stage: disks Aug 5 21:41:53.675471 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Aug 5 21:41:53.669780 ignition[905]: no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:53.684264 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 21:41:53.669790 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:53.695750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 21:41:53.670888 ignition[905]: disks: disks passed Aug 5 21:41:53.707376 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:41:53.670929 ignition[905]: Ignition finished successfully Aug 5 21:41:53.719031 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:41:53.731095 systemd[1]: Reached target basic.target - Basic System. Aug 5 21:41:53.761365 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 21:41:53.846663 systemd-fsck[914]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 5 21:41:53.855246 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 21:41:53.872313 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 21:41:53.927207 kernel: EXT4-fs (sda9): mounted filesystem ec701988-3dff-4e7d-a2a2-79d78965de5d r/w with ordered data mode. Quota mode: none. Aug 5 21:41:53.927517 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 21:41:53.932696 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 21:41:53.977231 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:41:53.989402 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 21:41:53.998905 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 5 21:41:54.012425 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 21:41:54.024556 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:41:54.051787 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (925) Aug 5 21:41:54.051812 kernel: BTRFS info (device sda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:54.045011 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 21:41:54.071741 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:54.071762 kernel: BTRFS info (device sda6): using free space tree Aug 5 21:41:54.078188 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 21:41:54.079525 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 21:41:54.086923 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
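The fsck summary earlier in this stretch reports usage as used/total pairs (14/7326000 inodes, 477710/7359488 blocks). A small sketch that converts such a line into percentages, assuming the exact "clean, N/M files, N/M blocks" wording shown in the log:

import re

# Format as printed by systemd-fsck above.
LINE = "ROOT: clean, 14/7326000 files, 477710/7359488 blocks"

def usage(line: str) -> dict:
    m = re.search(r"(\d+)/(\d+) files, (\d+)/(\d+) blocks", line)
    if not m:
        raise ValueError("unexpected fsck summary format")
    files_used, files_total, blocks_used, blocks_total = map(int, m.groups())
    return {
        "inode_use_pct": 100.0 * files_used / files_total,
        "block_use_pct": 100.0 * blocks_used / blocks_total,
    }

if __name__ == "__main__":
    # For the values above this prints roughly 0.0% inodes and 6.5% blocks used.
    print(usage(LINE))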
Aug 5 21:41:54.156347 systemd-networkd[879]: eth0: Gained IPv6LL Aug 5 21:41:54.614391 coreos-metadata[927]: Aug 05 21:41:54.614 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 21:41:54.623487 coreos-metadata[927]: Aug 05 21:41:54.621 INFO Fetch successful Aug 5 21:41:54.623487 coreos-metadata[927]: Aug 05 21:41:54.621 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 5 21:41:54.641634 coreos-metadata[927]: Aug 05 21:41:54.641 INFO Fetch successful Aug 5 21:41:54.655209 coreos-metadata[927]: Aug 05 21:41:54.655 INFO wrote hostname ci-4012.1.0-a-183bdb833d to /sysroot/etc/hostname Aug 5 21:41:54.664975 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 21:41:54.671960 systemd-networkd[879]: enP7601s1: Gained IPv6LL Aug 5 21:41:54.876107 initrd-setup-root[955]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 21:41:54.907651 initrd-setup-root[962]: cut: /sysroot/etc/group: No such file or directory Aug 5 21:41:54.931524 initrd-setup-root[969]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 21:41:54.959276 initrd-setup-root[976]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 21:41:56.110717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 21:41:56.128464 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 21:41:56.136769 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 21:41:56.160761 kernel: BTRFS info (device sda6): last unmount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:56.161679 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 21:41:56.186860 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 21:41:56.199258 ignition[1045]: INFO : Ignition 2.19.0 Aug 5 21:41:56.199258 ignition[1045]: INFO : Stage: mount Aug 5 21:41:56.207017 ignition[1045]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:56.207017 ignition[1045]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:56.207017 ignition[1045]: INFO : mount: mount passed Aug 5 21:41:56.207017 ignition[1045]: INFO : Ignition finished successfully Aug 5 21:41:56.204720 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 21:41:56.235394 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 21:41:56.255370 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 21:41:56.284265 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1057) Aug 5 21:41:56.298127 kernel: BTRFS info (device sda6): first mount of filesystem 2fbfcd26-f9be-477f-9b31-7e91608e027d Aug 5 21:41:56.298190 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Aug 5 21:41:56.302639 kernel: BTRFS info (device sda6): using free space tree Aug 5 21:41:56.310191 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 21:41:56.310642 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
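flatcar-metadata-hostname.service above fetches the compute name from IMDS and writes it to /sysroot/etc/hostname. A simplified Python sketch of the same idea, using the endpoint shown in the log; the target path is a plain argument here, and the real agent (coreos-metadata) does considerably more than this.

import sys
import urllib.request

NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

def fetch_name() -> str:
    req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    # In the log the target is /sysroot/etc/hostname; default to a local file here.
    target = sys.argv[1] if len(sys.argv) > 1 else "hostname.txt"
    name = fetch_name()
    with open(target, "w") as f:
        f.write(name + "\n")
    print("wrote hostname", name, "to", target)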
Aug 5 21:41:56.343434 ignition[1074]: INFO : Ignition 2.19.0 Aug 5 21:41:56.343434 ignition[1074]: INFO : Stage: files Aug 5 21:41:56.351599 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:56.351599 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:56.351599 ignition[1074]: DEBUG : files: compiled without relabeling support, skipping Aug 5 21:41:56.371422 ignition[1074]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 21:41:56.371422 ignition[1074]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 21:41:56.506451 ignition[1074]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 21:41:56.513922 ignition[1074]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 21:41:56.513922 ignition[1074]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 21:41:56.507617 unknown[1074]: wrote ssh authorized keys file for user: core Aug 5 21:41:56.535765 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:41:56.535765 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Aug 5 21:41:56.648776 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 21:41:56.847583 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Aug 5 21:41:56.847583 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:56.869775 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Aug 5 21:41:57.286329 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 21:41:57.541761 ignition[1074]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Aug 5 21:41:57.541761 ignition[1074]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 21:41:57.561350 ignition[1074]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:41:57.620114 ignition[1074]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 21:41:57.620114 ignition[1074]: INFO : files: files passed Aug 5 21:41:57.620114 ignition[1074]: INFO : Ignition finished successfully Aug 5 21:41:57.587230 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 21:41:57.620468 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 21:41:57.637342 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 21:41:57.656620 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 21:41:57.656725 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 21:41:57.697821 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:41:57.697821 initrd-setup-root-after-ignition[1103]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:41:57.716962 initrd-setup-root-after-ignition[1107]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 21:41:57.717642 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:41:57.732205 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 21:41:57.760135 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 21:41:57.790914 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 21:41:57.792266 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 21:41:57.806521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Aug 5 21:41:57.819501 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 21:41:57.830995 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 21:41:57.847414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 21:41:57.870972 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:41:57.887324 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 21:41:57.902863 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 21:41:57.902985 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 21:41:57.916105 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:41:57.929257 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:41:57.942492 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 21:41:57.954386 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 21:41:57.954459 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 21:41:57.972227 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 21:41:57.985111 systemd[1]: Stopped target basic.target - Basic System. Aug 5 21:41:57.995927 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 21:41:58.007196 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 21:41:58.019649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 21:41:58.033186 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 21:41:58.044788 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 21:41:58.057782 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 21:41:58.070909 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 21:41:58.082274 systemd[1]: Stopped target swap.target - Swaps. Aug 5 21:41:58.092620 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 21:41:58.092694 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 21:41:58.108505 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:41:58.120306 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:41:58.132420 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 21:41:58.138323 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:41:58.145141 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 21:41:58.145218 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 21:41:58.163020 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 21:41:58.163081 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 21:41:58.175080 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 21:41:58.175130 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 21:41:58.185598 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 5 21:41:58.185645 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 21:41:58.215396 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Aug 5 21:41:58.227229 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 21:41:58.227303 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 21:41:58.261369 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 21:41:58.279322 ignition[1128]: INFO : Ignition 2.19.0 Aug 5 21:41:58.279322 ignition[1128]: INFO : Stage: umount Aug 5 21:41:58.279322 ignition[1128]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 21:41:58.279322 ignition[1128]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 21:41:58.279322 ignition[1128]: INFO : umount: umount passed Aug 5 21:41:58.279322 ignition[1128]: INFO : Ignition finished successfully Aug 5 21:41:58.271866 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 21:41:58.271946 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:41:58.283410 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 21:41:58.283469 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 21:41:58.299387 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 21:41:58.299475 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 21:41:58.311948 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 21:41:58.312060 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 21:41:58.322463 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 21:41:58.322527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 21:41:58.333287 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 21:41:58.333339 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 21:41:58.344410 systemd[1]: Stopped target network.target - Network. Aug 5 21:41:58.355641 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 21:41:58.355710 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 21:41:58.367811 systemd[1]: Stopped target paths.target - Path Units. Aug 5 21:41:58.379376 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 21:41:58.384142 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:41:58.391803 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 21:41:58.397340 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 21:41:58.407238 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 21:41:58.407296 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 21:41:58.417272 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 21:41:58.417315 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 21:41:58.429010 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 21:41:58.429063 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 21:41:58.439573 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 21:41:58.439623 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 21:41:58.450051 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 21:41:58.460330 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 21:41:58.470204 systemd-networkd[879]: eth0: DHCPv6 lease lost Aug 5 21:41:58.478133 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Aug 5 21:41:58.478729 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 21:41:58.478825 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 21:41:58.491608 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 21:41:58.493204 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 21:41:58.507780 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 21:41:58.509187 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 21:41:58.519408 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 21:41:58.519467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:41:58.533827 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 21:41:58.533901 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 21:41:58.748929 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: Data path switched from VF: enP7601s1 Aug 5 21:41:58.570392 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 21:41:58.578566 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 21:41:58.578646 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 21:41:58.590297 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 21:41:58.590356 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:41:58.601247 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 21:41:58.601294 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 21:41:58.612187 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 21:41:58.612236 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:41:58.624411 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:41:58.657730 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 21:41:58.657904 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:41:58.669279 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 21:41:58.669328 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 21:41:58.680466 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 21:41:58.680504 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:41:58.692229 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 21:41:58.692279 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 21:41:58.707406 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 21:41:58.707461 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 21:41:58.731751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 21:41:58.731815 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 21:41:58.777385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 21:41:58.791934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 21:41:58.792005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:41:58.806188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 5 21:41:58.806249 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:41:58.818818 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 21:41:58.818925 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 21:41:58.852677 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 21:41:58.852810 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 21:41:58.984241 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Aug 5 21:41:58.862943 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 21:41:58.889366 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 21:41:58.905769 systemd[1]: Switching root. Aug 5 21:41:58.998832 systemd-journald[217]: Journal stopped Aug 5 21:42:04.510841 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 21:42:04.510864 kernel: SELinux: policy capability open_perms=1 Aug 5 21:42:04.510875 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 21:42:04.510884 kernel: SELinux: policy capability always_check_network=0 Aug 5 21:42:04.510892 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 21:42:04.510900 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 21:42:04.510909 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 21:42:04.510917 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 21:42:04.510926 kernel: audit: type=1403 audit(1722894121.358:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 21:42:04.510935 systemd[1]: Successfully loaded SELinux policy in 160.608ms. Aug 5 21:42:04.510947 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.808ms. Aug 5 21:42:04.510957 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 21:42:04.510966 systemd[1]: Detected virtualization microsoft. Aug 5 21:42:04.510975 systemd[1]: Detected architecture arm64. Aug 5 21:42:04.510984 systemd[1]: Detected first boot. Aug 5 21:42:04.510995 systemd[1]: Hostname set to ci-4012.1.0-a-183bdb833d. Aug 5 21:42:04.511004 systemd[1]: Initializing machine ID from random generator. Aug 5 21:42:04.511015 zram_generator::config[1168]: No configuration found. Aug 5 21:42:04.511025 systemd[1]: Populated /etc with preset unit settings. Aug 5 21:42:04.511034 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 21:42:04.511043 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 21:42:04.511053 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 21:42:04.511063 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 21:42:04.511072 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 21:42:04.511082 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 21:42:04.511091 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 21:42:04.511101 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 21:42:04.511110 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
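Among the first-boot steps above, systemd initializes the machine ID from a random generator. A toy Python sketch that produces an ID in the same 32-character lowercase-hex form used in /etc/machine-id; systemd's real code path also honors IDs handed in by VMs and containers, which is not modeled here.

import secrets

def new_machine_id() -> str:
    # /etc/machine-id holds a 128-bit ID as 32 lowercase hex characters.
    # Plain random hex is close enough for illustration; systemd additionally
    # normalizes the value it derives from a random UUID.
    return secrets.token_hex(16)

if __name__ == "__main__":
    mid = new_machine_id()
    assert len(mid) == 32
    print(mid)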
Aug 5 21:42:04.511121 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 21:42:04.511131 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 21:42:04.511140 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 21:42:04.511149 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 21:42:04.511182 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 21:42:04.511192 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 21:42:04.511202 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 21:42:04.511212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 21:42:04.511221 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Aug 5 21:42:04.511232 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 21:42:04.511243 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 21:42:04.511253 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 21:42:04.511264 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 21:42:04.511274 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 21:42:04.511283 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 21:42:04.511293 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 21:42:04.511303 systemd[1]: Reached target slices.target - Slice Units. Aug 5 21:42:04.511313 systemd[1]: Reached target swap.target - Swaps. Aug 5 21:42:04.511322 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 21:42:04.511332 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 21:42:04.511341 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 21:42:04.511350 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 21:42:04.511362 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 21:42:04.511372 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 21:42:04.511381 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 21:42:04.511391 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 21:42:04.511400 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 21:42:04.511410 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 21:42:04.511419 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 21:42:04.511430 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 21:42:04.511440 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 21:42:04.511451 systemd[1]: Reached target machines.target - Containers. Aug 5 21:42:04.511460 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 21:42:04.511470 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 5 21:42:04.511480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 21:42:04.511489 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 21:42:04.511499 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:42:04.511508 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:42:04.511519 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:42:04.511529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 21:42:04.511538 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:42:04.511548 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 21:42:04.511557 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 21:42:04.511567 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 21:42:04.511576 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 21:42:04.511586 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 21:42:04.511597 kernel: fuse: init (API version 7.39) Aug 5 21:42:04.511606 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 21:42:04.511615 kernel: loop: module loaded Aug 5 21:42:04.511624 kernel: ACPI: bus type drm_connector registered Aug 5 21:42:04.511633 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 21:42:04.511657 systemd-journald[1270]: Collecting audit messages is disabled. Aug 5 21:42:04.511679 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 21:42:04.511690 systemd-journald[1270]: Journal started Aug 5 21:42:04.511710 systemd-journald[1270]: Runtime Journal (/run/log/journal/70e0babd3b7a454f822f99c344b91965) is 8.0M, max 78.6M, 70.6M free. Aug 5 21:42:03.504038 systemd[1]: Queued start job for default target multi-user.target. Aug 5 21:42:03.606537 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 5 21:42:03.606906 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 21:42:03.607254 systemd[1]: systemd-journald.service: Consumed 3.266s CPU time. Aug 5 21:42:04.538532 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 21:42:04.554355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 21:42:04.564465 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 21:42:04.564508 systemd[1]: Stopped verity-setup.service. Aug 5 21:42:04.583589 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 21:42:04.584432 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 21:42:04.590731 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 21:42:04.597999 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 21:42:04.603927 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 21:42:04.610830 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 21:42:04.617470 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 21:42:04.623295 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 21:42:04.630760 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 5 21:42:04.638112 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 21:42:04.638269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 21:42:04.645518 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:42:04.645654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:42:04.652508 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:42:04.652634 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:42:04.659190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:42:04.659318 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:42:04.666434 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 21:42:04.666561 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 21:42:04.673336 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:42:04.673466 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:42:04.682182 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 21:42:04.689001 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 21:42:04.696835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 21:42:04.704587 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 21:42:04.720822 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 21:42:04.731268 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 21:42:04.742317 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 21:42:04.749213 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 21:42:04.749253 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 21:42:04.756263 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 21:42:04.764564 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 21:42:04.772482 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 21:42:04.778512 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:42:04.794114 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 21:42:04.803353 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 21:42:04.810123 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:42:04.811386 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 21:42:04.817976 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:42:04.820356 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 21:42:04.830453 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 21:42:04.855352 systemd[1]: Starting systemd-sysusers.service - Create System Users... 
Aug 5 21:42:04.864953 systemd-journald[1270]: Time spent on flushing to /var/log/journal/70e0babd3b7a454f822f99c344b91965 is 91.187ms for 902 entries. Aug 5 21:42:04.864953 systemd-journald[1270]: System Journal (/var/log/journal/70e0babd3b7a454f822f99c344b91965) is 11.8M, max 2.6G, 2.6G free. Aug 5 21:42:05.027772 systemd-journald[1270]: Received client request to flush runtime journal. Aug 5 21:42:05.027839 systemd-journald[1270]: /var/log/journal/70e0babd3b7a454f822f99c344b91965/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Aug 5 21:42:05.027870 systemd-journald[1270]: Rotating system journal. Aug 5 21:42:05.027900 kernel: loop0: detected capacity change from 0 to 59688 Aug 5 21:42:05.027921 kernel: block loop0: the capability attribute has been deprecated. Aug 5 21:42:04.866810 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 21:42:04.906312 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 21:42:04.913762 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 21:42:04.925820 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 21:42:04.934930 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 21:42:04.954481 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 21:42:04.974625 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 21:42:04.990716 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 21:42:05.003292 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 5 21:42:05.027591 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 21:42:05.034690 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 21:42:05.053384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 21:42:05.077262 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 21:42:05.077885 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 21:42:05.112377 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Aug 5 21:42:05.112395 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Aug 5 21:42:05.118201 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 21:42:05.379209 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 21:42:05.423198 kernel: loop1: detected capacity change from 0 to 62152 Aug 5 21:42:05.701199 kernel: loop2: detected capacity change from 0 to 194512 Aug 5 21:42:05.746192 kernel: loop3: detected capacity change from 0 to 113712 Aug 5 21:42:05.974131 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 21:42:05.985360 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 21:42:06.016855 systemd-udevd[1327]: Using default interface naming scheme 'v255'. 
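journald reports above that flushing the runtime journal to /var/log/journal took 91.187ms for 902 entries. The per-entry average that implies, as a short check using the figures copied from the log:

# Figures from the systemd-journald message above.
flush_ms = 91.187
entries = 902

per_entry_ms = flush_ms / entries
print(f"{per_entry_ms:.3f} ms per entry")  # ~0.101 ms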
Aug 5 21:42:06.131201 kernel: loop4: detected capacity change from 0 to 59688 Aug 5 21:42:06.139170 kernel: loop5: detected capacity change from 0 to 62152 Aug 5 21:42:06.147176 kernel: loop6: detected capacity change from 0 to 194512 Aug 5 21:42:06.157172 kernel: loop7: detected capacity change from 0 to 113712 Aug 5 21:42:06.160502 (sd-merge)[1329]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 5 21:42:06.160917 (sd-merge)[1329]: Merged extensions into '/usr'. Aug 5 21:42:06.164783 systemd[1]: Reloading requested from client PID 1301 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 21:42:06.164796 systemd[1]: Reloading... Aug 5 21:42:06.229257 zram_generator::config[1354]: No configuration found. Aug 5 21:42:06.358196 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1407) Aug 5 21:42:06.386272 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:42:06.470578 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Aug 5 21:42:06.471341 systemd[1]: Reloading finished in 305 ms. Aug 5 21:42:06.482188 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 21:42:06.501090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 21:42:06.518200 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 21:42:06.533942 kernel: hv_vmbus: registering driver hv_balloon Aug 5 21:42:06.534063 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 5 21:42:06.540330 kernel: hv_balloon: Memory hot add disabled on ARM64 Aug 5 21:42:06.547055 kernel: hv_vmbus: registering driver hyperv_fb Aug 5 21:42:06.547178 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 5 21:42:06.551276 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 5 21:42:06.551626 systemd[1]: Starting ensure-sysext.service... Aug 5 21:42:06.563503 kernel: Console: switching to colour dummy device 80x25 Aug 5 21:42:06.573796 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 21:42:06.581412 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 21:42:06.598327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 21:42:06.627403 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:42:06.640246 systemd[1]: Reloading requested from client PID 1446 ('systemctl') (unit ensure-sysext.service)... Aug 5 21:42:06.640275 systemd[1]: Reloading... Aug 5 21:42:06.664967 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 21:42:06.665304 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 21:42:06.665959 systemd-tmpfiles[1452]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 21:42:06.669207 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. Aug 5 21:42:06.669319 systemd-tmpfiles[1452]: ACLs are not supported, ignoring. 
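Earlier in this stretch, systemd-sysext (sd-merge) merges the extensions containerd-flatcar, docker-flatcar, kubernetes and oem-azure into /usr. A small sketch that lists candidate extension images in the directories systemd-sysext is documented to scan; the merge itself (an overlay on /usr and /opt) is not reproduced here.

import glob
import os

# Directories documented for systemd-sysext; the log above shows
# /etc/extensions/kubernetes.raw being one such image.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def list_extension_images():
    found = []
    for d in SEARCH_DIRS:
        found.extend(sorted(glob.glob(os.path.join(d, "*.raw"))))
    return found

if __name__ == "__main__":
    for image in list_extension_images():
        print("extension image:", image)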
Aug 5 21:42:06.698229 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1419) Aug 5 21:42:06.698462 systemd-tmpfiles[1452]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:42:06.698479 systemd-tmpfiles[1452]: Skipping /boot Aug 5 21:42:06.732409 systemd-tmpfiles[1452]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 21:42:06.732545 systemd-tmpfiles[1452]: Skipping /boot Aug 5 21:42:06.787213 zram_generator::config[1502]: No configuration found. Aug 5 21:42:06.890311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:42:06.973677 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 21:42:06.980337 systemd[1]: Reloading finished in 339 ms. Aug 5 21:42:07.000042 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 21:42:07.013018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:42:07.052684 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:42:07.060131 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 21:42:07.067281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:42:07.069620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:42:07.080451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:42:07.095960 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:42:07.102359 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:42:07.108099 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 21:42:07.116567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 21:42:07.129008 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 21:42:07.145683 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 21:42:07.160002 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 21:42:07.166399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 21:42:07.166616 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:42:07.174301 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:42:07.186842 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 21:42:07.197617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:42:07.197779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:42:07.206001 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:42:07.206316 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:42:07.214128 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:42:07.214449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 5 21:42:07.223753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 21:42:07.244353 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 21:42:07.253018 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 21:42:07.260384 augenrules[1604]: No rules Aug 5 21:42:07.265203 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:42:07.276272 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 21:42:07.295696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 21:42:07.304523 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 21:42:07.314515 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 21:42:07.327110 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 21:42:07.342551 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 21:42:07.365527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 21:42:07.371678 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 21:42:07.371972 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 21:42:07.383749 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 21:42:07.392468 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 21:42:07.396229 lvm[1621]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:42:07.394394 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 21:42:07.402660 systemd-resolved[1589]: Positive Trust Anchors: Aug 5 21:42:07.402880 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 21:42:07.404219 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 21:42:07.405222 systemd-resolved[1589]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 21:42:07.405256 systemd-resolved[1589]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 21:42:07.411196 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 21:42:07.411341 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 21:42:07.418942 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 21:42:07.419080 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 21:42:07.429196 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Aug 5 21:42:07.430909 systemd-networkd[1451]: lo: Link UP Aug 5 21:42:07.431211 systemd-networkd[1451]: lo: Gained carrier Aug 5 21:42:07.433176 systemd-networkd[1451]: Enumeration completed Aug 5 21:42:07.433633 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:42:07.433726 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:42:07.436611 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 21:42:07.438797 systemd-resolved[1589]: Using system hostname 'ci-4012.1.0-a-183bdb833d'. Aug 5 21:42:07.443953 systemd[1]: Finished ensure-sysext.service. Aug 5 21:42:07.448795 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 21:42:07.463859 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 21:42:07.475324 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 21:42:07.485386 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 21:42:07.486689 lvm[1636]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 21:42:07.498252 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 21:42:07.498335 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 21:42:07.502224 kernel: mlx5_core 1db1:00:02.0 enP7601s1: Link up Aug 5 21:42:07.521911 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 21:42:07.528179 kernel: hv_netvsc 00224879-7f2f-0022-4879-7f2f00224879 eth0: Data path switched to VF: enP7601s1 Aug 5 21:42:07.534938 systemd-networkd[1451]: enP7601s1: Link UP Aug 5 21:42:07.535031 systemd-networkd[1451]: eth0: Link UP Aug 5 21:42:07.535035 systemd-networkd[1451]: eth0: Gained carrier Aug 5 21:42:07.535050 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 21:42:07.536797 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 21:42:07.543113 systemd[1]: Reached target network.target - Network. Aug 5 21:42:07.547999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 21:42:07.557555 systemd-networkd[1451]: enP7601s1: Gained carrier Aug 5 21:42:07.565228 systemd-networkd[1451]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 5 21:42:08.078681 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 21:42:08.085899 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 21:42:08.684274 systemd-networkd[1451]: eth0: Gained IPv6LL Aug 5 21:42:08.686228 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 21:42:08.693840 systemd[1]: Reached target network-online.target - Network is Online. 
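The DHCPv4 lease above assigns 10.200.20.35/24 with gateway 10.200.20.1. A quick standard-library check that the gateway sits inside the leased subnet, using the values from the log:

import ipaddress

iface = ipaddress.ip_interface("10.200.20.35/24")   # address from the lease
gateway = ipaddress.ip_address("10.200.20.1")       # gateway from the lease

print("network:", iface.network)                        # 10.200.20.0/24
print("gateway in subnet:", gateway in iface.network)   # True
print("usable hosts:", iface.network.num_addresses - 2) # 254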
Aug 5 21:42:08.876284 systemd-networkd[1451]: enP7601s1: Gained IPv6LL Aug 5 21:42:10.853963 ldconfig[1296]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 21:42:10.868678 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 21:42:10.880351 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 21:42:10.893921 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 21:42:10.901828 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 21:42:10.907636 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 21:42:10.914822 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 21:42:10.921942 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 21:42:10.927936 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 21:42:10.934698 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 21:42:10.941648 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 21:42:10.941776 systemd[1]: Reached target paths.target - Path Units. Aug 5 21:42:10.946675 systemd[1]: Reached target timers.target - Timer Units. Aug 5 21:42:10.952504 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 21:42:10.960098 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 21:42:10.968850 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 21:42:10.975472 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 21:42:10.981551 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 21:42:10.986856 systemd[1]: Reached target basic.target - Basic System. Aug 5 21:42:10.992188 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:42:10.992294 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 21:42:11.001290 systemd[1]: Starting chronyd.service - NTP client/server... Aug 5 21:42:11.010314 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 21:42:11.021331 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 21:42:11.029829 (chronyd)[1646]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Aug 5 21:42:11.042397 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 21:42:11.048711 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 21:42:11.055436 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 21:42:11.060941 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 21:42:11.064311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:42:11.072966 chronyd[1657]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Aug 5 21:42:11.078002 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... 
Aug 5 21:42:11.084043 jq[1652]: false Aug 5 21:42:11.085701 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 21:42:11.093673 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 21:42:11.102377 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 21:42:11.112358 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 21:42:11.125242 extend-filesystems[1653]: Found loop4 Aug 5 21:42:11.125242 extend-filesystems[1653]: Found loop5 Aug 5 21:42:11.125242 extend-filesystems[1653]: Found loop6 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found loop7 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda1 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda2 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda3 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found usr Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda4 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda6 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda7 Aug 5 21:42:11.174501 extend-filesystems[1653]: Found sda9 Aug 5 21:42:11.174501 extend-filesystems[1653]: Checking size of /dev/sda9 Aug 5 21:42:11.434917 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1696) Aug 5 21:42:11.127876 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 21:42:11.132004 chronyd[1657]: Timezone right/UTC failed leap second check, ignoring Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.320 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.328 INFO Fetch successful Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.328 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.333 INFO Fetch successful Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.333 INFO Fetching http://168.63.129.16/machine/f2e630d7-bf60-49c1-b2fb-144ca09fc865/491aedc5%2D4b57%2D4681%2Da7ea%2D8709dde54618.%5Fci%2D4012.1.0%2Da%2D183bdb833d?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.335 INFO Fetch successful Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.336 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 5 21:42:11.438055 coreos-metadata[1648]: Aug 05 21:42:11.353 INFO Fetch successful Aug 5 21:42:11.438375 extend-filesystems[1653]: Old size kept for /dev/sda9 Aug 5 21:42:11.438375 extend-filesystems[1653]: Found sr0 Aug 5 21:42:11.143876 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 21:42:11.132219 chronyd[1657]: Loaded seccomp filter (level 2) Aug 5 21:42:11.509907 update_engine[1676]: I0805 21:42:11.258653 1676 main.cc:92] Flatcar Update Engine starting Aug 5 21:42:11.509907 update_engine[1676]: I0805 21:42:11.303790 1676 update_check_scheduler.cc:74] Next update check in 8m56s Aug 5 21:42:11.144442 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
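Annotation: coreos-metadata above fetches instance data both from the Azure WireServer (168.63.129.16) and from IMDS (169.254.169.254). The sketch below reproduces only the IMDS vmSize fetch, using the exact URL from the log; IMDS requires the "Metadata: true" header and is reachable only from inside the VM. This is an illustration, not the agent's own implementation.

    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        # prints the VM size string for this instance
        print(resp.read().decode())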
Aug 5 21:42:11.181149 dbus-daemon[1649]: [system] SELinux support is enabled Aug 5 21:42:11.510465 jq[1677]: true Aug 5 21:42:11.150436 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 21:42:11.349102 dbus-daemon[1649]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 5 21:42:11.163434 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 21:42:11.185388 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 21:42:11.510909 tar[1695]: linux-arm64/helm Aug 5 21:42:11.201484 systemd[1]: Started chronyd.service - NTP client/server. Aug 5 21:42:11.218818 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 21:42:11.514669 jq[1700]: true Aug 5 21:42:11.219080 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 21:42:11.219366 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 21:42:11.221198 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 21:42:11.240769 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 21:42:11.240982 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 21:42:11.259460 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 21:42:11.272665 systemd-logind[1666]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Aug 5 21:42:11.278718 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 21:42:11.278877 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 21:42:11.279057 systemd-logind[1666]: New seat seat0. Aug 5 21:42:11.297622 systemd[1]: Started systemd-logind.service - User Login Management. Aug 5 21:42:11.338245 (ntainerd)[1701]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 21:42:11.346862 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 21:42:11.346892 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 21:42:11.381821 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 21:42:11.381846 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 21:42:11.435323 systemd[1]: Started update-engine.service - Update Engine. Aug 5 21:42:11.459424 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 21:42:11.495408 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 5 21:42:11.531037 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 21:42:11.574103 bash[1759]: Updated "/home/core/.ssh/authorized_keys" Aug 5 21:42:11.581190 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 21:42:11.593550 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Aug 5 21:42:11.844524 locksmithd[1749]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 21:42:11.876591 sshd_keygen[1674]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 21:42:11.921057 containerd[1701]: time="2024-08-05T21:42:11.916140700Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 21:42:11.927050 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 21:42:11.948614 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 21:42:11.958726 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 5 21:42:11.962697 containerd[1701]: time="2024-08-05T21:42:11.962645220Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 21:42:11.962860 containerd[1701]: time="2024-08-05T21:42:11.962843980Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:42:11.964795 containerd[1701]: time="2024-08-05T21:42:11.964757260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:42:11.964879 containerd[1701]: time="2024-08-05T21:42:11.964866180Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:42:11.965173 containerd[1701]: time="2024-08-05T21:42:11.965133260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:42:11.966152 containerd[1701]: time="2024-08-05T21:42:11.966127020Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 21:42:11.966338 containerd[1701]: time="2024-08-05T21:42:11.966320340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 21:42:11.966455 containerd[1701]: time="2024-08-05T21:42:11.966437620Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:42:11.966506 containerd[1701]: time="2024-08-05T21:42:11.966494460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 21:42:11.966628 containerd[1701]: time="2024-08-05T21:42:11.966613420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967660620Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967691980Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967704220Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967832940Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967847420Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967926500Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 21:42:11.967971 containerd[1701]: time="2024-08-05T21:42:11.967941740Z" level=info msg="metadata content store policy set" policy=shared Aug 5 21:42:11.984449 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 21:42:11.985597 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 21:42:11.991009 containerd[1701]: time="2024-08-05T21:42:11.990967660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 21:42:11.991086 containerd[1701]: time="2024-08-05T21:42:11.991018140Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 21:42:11.991086 containerd[1701]: time="2024-08-05T21:42:11.991032620Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 21:42:11.991086 containerd[1701]: time="2024-08-05T21:42:11.991066580Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 21:42:11.991086 containerd[1701]: time="2024-08-05T21:42:11.991083340Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 21:42:11.991207 containerd[1701]: time="2024-08-05T21:42:11.991094820Z" level=info msg="NRI interface is disabled by configuration." Aug 5 21:42:11.991207 containerd[1701]: time="2024-08-05T21:42:11.991107180Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 21:42:11.991311 containerd[1701]: time="2024-08-05T21:42:11.991272700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 21:42:11.991311 containerd[1701]: time="2024-08-05T21:42:11.991295700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 21:42:11.991362 containerd[1701]: time="2024-08-05T21:42:11.991312020Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 21:42:11.991362 containerd[1701]: time="2024-08-05T21:42:11.991328100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 21:42:11.991362 containerd[1701]: time="2024-08-05T21:42:11.991343100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.991413 containerd[1701]: time="2024-08-05T21:42:11.991360900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.991413 containerd[1701]: time="2024-08-05T21:42:11.991374500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Aug 5 21:42:11.991413 containerd[1701]: time="2024-08-05T21:42:11.991391980Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.991413 containerd[1701]: time="2024-08-05T21:42:11.991406740Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.991480 containerd[1701]: time="2024-08-05T21:42:11.991421020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.991480 containerd[1701]: time="2024-08-05T21:42:11.991435180Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.991480 containerd[1701]: time="2024-08-05T21:42:11.991447380Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 21:42:11.991591 containerd[1701]: time="2024-08-05T21:42:11.991549900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 21:42:11.995811 containerd[1701]: time="2024-08-05T21:42:11.995764220Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 21:42:11.996851 containerd[1701]: time="2024-08-05T21:42:11.996330940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.996851 containerd[1701]: time="2024-08-05T21:42:11.996360980Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 21:42:11.996851 containerd[1701]: time="2024-08-05T21:42:11.996390780Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.997985420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998022140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998037500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998056740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998071100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998085140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998098220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998111020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998124820Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998297780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998318220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998332060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998345900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998358740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 21:42:11.999688 containerd[1701]: time="2024-08-05T21:42:11.998374900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 21:42:12.000061 containerd[1701]: time="2024-08-05T21:42:11.998387300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 21:42:12.000061 containerd[1701]: time="2024-08-05T21:42:11.998399020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 21:42:12.000101 containerd[1701]: time="2024-08-05T21:42:11.998650100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false 
IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 21:42:12.000101 containerd[1701]: time="2024-08-05T21:42:11.998705900Z" level=info msg="Connect containerd service" Aug 5 21:42:12.000101 containerd[1701]: time="2024-08-05T21:42:11.998746980Z" level=info msg="using legacy CRI server" Aug 5 21:42:12.000101 containerd[1701]: time="2024-08-05T21:42:11.998754500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 21:42:12.000101 containerd[1701]: time="2024-08-05T21:42:11.998833780Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 21:42:12.004149 containerd[1701]: time="2024-08-05T21:42:12.004115220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:42:12.004431 containerd[1701]: time="2024-08-05T21:42:12.004409500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 21:42:12.004513 containerd[1701]: time="2024-08-05T21:42:12.004497460Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 21:42:12.004944 containerd[1701]: time="2024-08-05T21:42:12.004922180Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 21:42:12.006188 containerd[1701]: time="2024-08-05T21:42:12.005964660Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.006847180Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.006909340Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.004501220Z" level=info msg="Start subscribing containerd event" Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.006979500Z" level=info msg="Start recovering state" Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.007050940Z" level=info msg="Start event monitor" Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.007063980Z" level=info msg="Start snapshots syncer" Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.007072980Z" level=info msg="Start cni network conf syncer for default" Aug 5 21:42:12.007124 containerd[1701]: time="2024-08-05T21:42:12.007080180Z" level=info msg="Start streaming server" Aug 5 21:42:12.008172 containerd[1701]: time="2024-08-05T21:42:12.007803380Z" level=info msg="containerd successfully booted in 0.096283s" Aug 5 21:42:12.012042 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
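Annotation: containerd above reports serving on /run/containerd/containerd.sock (gRPC) and /run/containerd/containerd.sock.ttrpc. A minimal way to confirm both sockets accept connections, without pulling in a gRPC client, is a plain AF_UNIX connect; run as root on the same host. Illustrative sketch only, not part of the boot sequence.

    import socket

    for path in ("/run/containerd/containerd.sock",
                 "/run/containerd/containerd.sock.ttrpc"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            print(f"{path}: accepting connections")
        except OSError as err:
            print(f"{path}: {err}")
        finally:
            s.close()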
Aug 5 21:42:12.023742 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 21:42:12.041363 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 5 21:42:12.055454 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 21:42:12.071044 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 21:42:12.083347 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 5 21:42:12.090227 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 21:42:12.152578 tar[1695]: linux-arm64/LICENSE Aug 5 21:42:12.153062 tar[1695]: linux-arm64/README.md Aug 5 21:42:12.164029 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 21:42:12.269501 (kubelet)[1810]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:42:12.269719 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:42:12.276955 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 21:42:12.284513 systemd[1]: Startup finished in 676ms (kernel) + 13.629s (initrd) + 11.085s (userspace) = 25.391s. Aug 5 21:42:12.517597 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 5 21:42:12.522848 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 5 21:42:12.541818 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 21:42:12.547449 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 21:42:12.551075 systemd-logind[1666]: New session 1 of user core. Aug 5 21:42:12.557791 systemd-logind[1666]: New session 2 of user core. Aug 5 21:42:12.566020 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 21:42:12.573492 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 21:42:12.578669 (systemd)[1821]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:12.754566 kubelet[1810]: E0805 21:42:12.754492 1810 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:42:12.756453 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:42:12.756577 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:42:12.765131 systemd[1821]: Queued start job for default target default.target. Aug 5 21:42:12.771078 systemd[1821]: Created slice app.slice - User Application Slice. Aug 5 21:42:12.771110 systemd[1821]: Reached target paths.target - Paths. Aug 5 21:42:12.771123 systemd[1821]: Reached target timers.target - Timers. Aug 5 21:42:12.773311 systemd[1821]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 21:42:12.793380 systemd[1821]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 21:42:12.793453 systemd[1821]: Reached target sockets.target - Sockets. Aug 5 21:42:12.793465 systemd[1821]: Reached target basic.target - Basic System. Aug 5 21:42:12.793514 systemd[1821]: Reached target default.target - Main User Target. Aug 5 21:42:12.793540 systemd[1821]: Startup finished in 208ms. 
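Annotation: the kubelet exit above (and at every scheduled restart later in the log) is the expected crash loop on a node that has not been joined to a cluster yet: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so kubelet exits with status 1 until the file exists. A trivial check, shown only to make the failure mode concrete:

    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    if cfg.exists():
        print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load it")
    else:
        # matches the run.go error in the log: open ...: no such file or directory
        print(f"{cfg} missing; kubelet will keep exiting until kubeadm creates it")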
Aug 5 21:42:12.794201 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 21:42:12.801324 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 21:42:12.802034 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 5 21:42:13.733728 waagent[1798]: 2024-08-05T21:42:13.733632Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 5 21:42:13.739729 waagent[1798]: 2024-08-05T21:42:13.739645Z INFO Daemon Daemon OS: flatcar 4012.1.0 Aug 5 21:42:13.744343 waagent[1798]: 2024-08-05T21:42:13.744272Z INFO Daemon Daemon Python: 3.11.9 Aug 5 21:42:13.748820 waagent[1798]: 2024-08-05T21:42:13.748715Z INFO Daemon Daemon Run daemon Aug 5 21:42:13.752965 waagent[1798]: 2024-08-05T21:42:13.752907Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.1.0' Aug 5 21:42:13.761807 waagent[1798]: 2024-08-05T21:42:13.761671Z INFO Daemon Daemon Using waagent for provisioning Aug 5 21:42:13.767083 waagent[1798]: 2024-08-05T21:42:13.767021Z INFO Daemon Daemon Activate resource disk Aug 5 21:42:13.771722 waagent[1798]: 2024-08-05T21:42:13.771660Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 5 21:42:13.782761 waagent[1798]: 2024-08-05T21:42:13.782688Z INFO Daemon Daemon Found device: None Aug 5 21:42:13.787233 waagent[1798]: 2024-08-05T21:42:13.787176Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 5 21:42:13.795525 waagent[1798]: 2024-08-05T21:42:13.795461Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 5 21:42:13.808495 waagent[1798]: 2024-08-05T21:42:13.808430Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 5 21:42:13.814238 waagent[1798]: 2024-08-05T21:42:13.814181Z INFO Daemon Daemon Running default provisioning handler Aug 5 21:42:13.826643 waagent[1798]: 2024-08-05T21:42:13.826057Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 5 21:42:13.840270 waagent[1798]: 2024-08-05T21:42:13.840201Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 5 21:42:13.849825 waagent[1798]: 2024-08-05T21:42:13.849757Z INFO Daemon Daemon cloud-init is enabled: False Aug 5 21:42:13.854866 waagent[1798]: 2024-08-05T21:42:13.854807Z INFO Daemon Daemon Copying ovf-env.xml Aug 5 21:42:13.942113 waagent[1798]: 2024-08-05T21:42:13.941307Z INFO Daemon Daemon Successfully mounted dvd Aug 5 21:42:13.970252 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 5 21:42:13.972482 waagent[1798]: 2024-08-05T21:42:13.972399Z INFO Daemon Daemon Detect protocol endpoint Aug 5 21:42:13.977526 waagent[1798]: 2024-08-05T21:42:13.977458Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 5 21:42:13.983569 waagent[1798]: 2024-08-05T21:42:13.983507Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 5 21:42:13.990677 waagent[1798]: 2024-08-05T21:42:13.990583Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 5 21:42:13.996138 waagent[1798]: 2024-08-05T21:42:13.996078Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 5 21:42:14.001536 waagent[1798]: 2024-08-05T21:42:14.001473Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 5 21:42:14.050269 waagent[1798]: 2024-08-05T21:42:14.050214Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 5 21:42:14.057057 waagent[1798]: 2024-08-05T21:42:14.057024Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 5 21:42:14.062498 waagent[1798]: 2024-08-05T21:42:14.062432Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 5 21:42:14.292244 waagent[1798]: 2024-08-05T21:42:14.288221Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 5 21:42:14.294801 waagent[1798]: 2024-08-05T21:42:14.294732Z INFO Daemon Daemon Forcing an update of the goal state. Aug 5 21:42:14.304005 waagent[1798]: 2024-08-05T21:42:14.303950Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 5 21:42:14.324425 waagent[1798]: 2024-08-05T21:42:14.324378Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Aug 5 21:42:14.330105 waagent[1798]: 2024-08-05T21:42:14.330055Z INFO Daemon Aug 5 21:42:14.333070 waagent[1798]: 2024-08-05T21:42:14.333022Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: b2abef54-fadd-4211-886b-ff1ec7f809f8 eTag: 2684351440708050968 source: Fabric] Aug 5 21:42:14.344284 waagent[1798]: 2024-08-05T21:42:14.344238Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 5 21:42:14.351486 waagent[1798]: 2024-08-05T21:42:14.351439Z INFO Daemon Aug 5 21:42:14.354317 waagent[1798]: 2024-08-05T21:42:14.354274Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 5 21:42:14.365199 waagent[1798]: 2024-08-05T21:42:14.365142Z INFO Daemon Daemon Downloading artifacts profile blob Aug 5 21:42:14.452216 waagent[1798]: 2024-08-05T21:42:14.452092Z INFO Daemon Downloaded certificate {'thumbprint': 'CFB0F763B90CAD196D4B4844EA4B48CE7C8F8CE2', 'hasPrivateKey': False} Aug 5 21:42:14.462271 waagent[1798]: 2024-08-05T21:42:14.462218Z INFO Daemon Downloaded certificate {'thumbprint': 'C68A4F6083E2D643DA43BC2709BBBAEA4E444317', 'hasPrivateKey': True} Aug 5 21:42:14.471999 waagent[1798]: 2024-08-05T21:42:14.471945Z INFO Daemon Fetch goal state completed Aug 5 21:42:14.483263 waagent[1798]: 2024-08-05T21:42:14.483214Z INFO Daemon Daemon Starting provisioning Aug 5 21:42:14.488246 waagent[1798]: 2024-08-05T21:42:14.488180Z INFO Daemon Daemon Handle ovf-env.xml. Aug 5 21:42:14.492777 waagent[1798]: 2024-08-05T21:42:14.492718Z INFO Daemon Daemon Set hostname [ci-4012.1.0-a-183bdb833d] Aug 5 21:42:14.531701 waagent[1798]: 2024-08-05T21:42:14.531626Z INFO Daemon Daemon Publish hostname [ci-4012.1.0-a-183bdb833d] Aug 5 21:42:14.540198 waagent[1798]: 2024-08-05T21:42:14.538202Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 5 21:42:14.544660 waagent[1798]: 2024-08-05T21:42:14.544552Z INFO Daemon Daemon Primary interface is [eth0] Aug 5 21:42:14.589440 waagent[1798]: 2024-08-05T21:42:14.589021Z INFO Daemon Daemon Create user account if not exists Aug 5 21:42:14.589173 systemd-networkd[1451]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 5 21:42:14.589177 systemd-networkd[1451]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 21:42:14.589206 systemd-networkd[1451]: eth0: DHCP lease lost Aug 5 21:42:14.594975 waagent[1798]: 2024-08-05T21:42:14.594806Z INFO Daemon Daemon User core already exists, skip useradd Aug 5 21:42:14.600944 waagent[1798]: 2024-08-05T21:42:14.600868Z INFO Daemon Daemon Configure sudoer Aug 5 21:42:14.605257 systemd-networkd[1451]: eth0: DHCPv6 lease lost Aug 5 21:42:14.605813 waagent[1798]: 2024-08-05T21:42:14.605710Z INFO Daemon Daemon Configure sshd Aug 5 21:42:14.610409 waagent[1798]: 2024-08-05T21:42:14.610342Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 5 21:42:14.623291 waagent[1798]: 2024-08-05T21:42:14.623220Z INFO Daemon Daemon Deploy ssh public key. Aug 5 21:42:14.639312 systemd-networkd[1451]: eth0: DHCPv4 address 10.200.20.35/24, gateway 10.200.20.1 acquired from 168.63.129.16 Aug 5 21:42:15.880247 waagent[1798]: 2024-08-05T21:42:15.875556Z INFO Daemon Daemon Provisioning complete Aug 5 21:42:15.897732 waagent[1798]: 2024-08-05T21:42:15.897683Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 5 21:42:15.903798 waagent[1798]: 2024-08-05T21:42:15.903733Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Aug 5 21:42:15.913429 waagent[1798]: 2024-08-05T21:42:15.913371Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 5 21:42:16.049827 waagent[1873]: 2024-08-05T21:42:16.049748Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 5 21:42:16.050686 waagent[1873]: 2024-08-05T21:42:16.050278Z INFO ExtHandler ExtHandler OS: flatcar 4012.1.0 Aug 5 21:42:16.050686 waagent[1873]: 2024-08-05T21:42:16.050359Z INFO ExtHandler ExtHandler Python: 3.11.9 Aug 5 21:42:16.117192 waagent[1873]: 2024-08-05T21:42:16.115153Z INFO ExtHandler ExtHandler Distro: flatcar-4012.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 5 21:42:16.117192 waagent[1873]: 2024-08-05T21:42:16.115415Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 21:42:16.117192 waagent[1873]: 2024-08-05T21:42:16.115478Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 21:42:16.123807 waagent[1873]: 2024-08-05T21:42:16.123727Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 5 21:42:16.130004 waagent[1873]: 2024-08-05T21:42:16.129951Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Aug 5 21:42:16.130634 waagent[1873]: 2024-08-05T21:42:16.130556Z INFO ExtHandler Aug 5 21:42:16.130690 waagent[1873]: 2024-08-05T21:42:16.130657Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6bcc0589-f69a-4b98-ba94-eb74facf7fcc eTag: 2684351440708050968 source: Fabric] Aug 5 21:42:16.131003 waagent[1873]: 2024-08-05T21:42:16.130961Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Aug 5 21:42:16.131647 waagent[1873]: 2024-08-05T21:42:16.131593Z INFO ExtHandler Aug 5 21:42:16.131719 waagent[1873]: 2024-08-05T21:42:16.131688Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 5 21:42:16.136654 waagent[1873]: 2024-08-05T21:42:16.136611Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 5 21:42:16.219229 waagent[1873]: 2024-08-05T21:42:16.219109Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFB0F763B90CAD196D4B4844EA4B48CE7C8F8CE2', 'hasPrivateKey': False} Aug 5 21:42:16.219692 waagent[1873]: 2024-08-05T21:42:16.219641Z INFO ExtHandler Downloaded certificate {'thumbprint': 'C68A4F6083E2D643DA43BC2709BBBAEA4E444317', 'hasPrivateKey': True} Aug 5 21:42:16.220213 waagent[1873]: 2024-08-05T21:42:16.220133Z INFO ExtHandler Fetch goal state completed Aug 5 21:42:16.238382 waagent[1873]: 2024-08-05T21:42:16.238316Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1873 Aug 5 21:42:16.238555 waagent[1873]: 2024-08-05T21:42:16.238514Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 5 21:42:16.240304 waagent[1873]: 2024-08-05T21:42:16.240252Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.1.0', '', 'Flatcar Container Linux by Kinvolk'] Aug 5 21:42:16.240717 waagent[1873]: 2024-08-05T21:42:16.240677Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 5 21:42:16.257618 waagent[1873]: 2024-08-05T21:42:16.257569Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 5 21:42:16.257816 waagent[1873]: 2024-08-05T21:42:16.257776Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Aug 5 21:42:16.264052 waagent[1873]: 2024-08-05T21:42:16.264015Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 5 21:42:16.271657 systemd[1]: Reloading requested from client PID 1888 ('systemctl') (unit waagent.service)... Aug 5 21:42:16.271675 systemd[1]: Reloading... Aug 5 21:42:16.364195 zram_generator::config[1922]: No configuration found. Aug 5 21:42:16.465073 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:42:16.545429 systemd[1]: Reloading finished in 273 ms. Aug 5 21:42:16.572179 waagent[1873]: 2024-08-05T21:42:16.569495Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 5 21:42:16.576798 systemd[1]: Reloading requested from client PID 1973 ('systemctl') (unit waagent.service)... Aug 5 21:42:16.576966 systemd[1]: Reloading... Aug 5 21:42:16.670184 zram_generator::config[2008]: No configuration found. Aug 5 21:42:16.773404 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:42:16.853205 systemd[1]: Reloading finished in 275 ms. 
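Annotation: the "Downloaded certificate {'thumbprint': ...}" entries above identify goal-state certificates by thumbprint. Assuming the conventional Azure definition of a thumbprint (the uppercase hex SHA-1 digest of the DER-encoded certificate), the sketch below shows how such a 40-character value is derived from a PEM file; the path is a placeholder, not one taken from this log.

    import hashlib
    import ssl

    pem_path = "/var/lib/waagent/example.crt"  # placeholder path, not from the log
    with open(pem_path) as f:
        der = ssl.PEM_cert_to_DER_cert(f.read())
    thumbprint = hashlib.sha1(der).hexdigest().upper()
    print(thumbprint)  # 40 hex characters, same shape as the thumbprints logged above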
Aug 5 21:42:16.877226 waagent[1873]: 2024-08-05T21:42:16.875424Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 5 21:42:16.877226 waagent[1873]: 2024-08-05T21:42:16.875607Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 5 21:42:22.559918 waagent[1873]: 2024-08-05T21:42:22.559818Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 5 21:42:22.560567 waagent[1873]: 2024-08-05T21:42:22.560509Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 5 21:42:22.561449 waagent[1873]: 2024-08-05T21:42:22.561357Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 5 21:42:22.562051 waagent[1873]: 2024-08-05T21:42:22.561870Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 5 21:42:22.562349 waagent[1873]: 2024-08-05T21:42:22.562303Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 21:42:22.563184 waagent[1873]: 2024-08-05T21:42:22.562424Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 21:42:22.563184 waagent[1873]: 2024-08-05T21:42:22.562511Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 21:42:22.563184 waagent[1873]: 2024-08-05T21:42:22.562655Z INFO EnvHandler ExtHandler Configure routes Aug 5 21:42:22.563184 waagent[1873]: 2024-08-05T21:42:22.562716Z INFO EnvHandler ExtHandler Gateway:None Aug 5 21:42:22.563184 waagent[1873]: 2024-08-05T21:42:22.562760Z INFO EnvHandler ExtHandler Routes:None Aug 5 21:42:22.563556 waagent[1873]: 2024-08-05T21:42:22.563398Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 5 21:42:22.563905 waagent[1873]: 2024-08-05T21:42:22.563619Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 5 21:42:22.564148 waagent[1873]: 2024-08-05T21:42:22.564104Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 21:42:22.564425 waagent[1873]: 2024-08-05T21:42:22.564382Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 5 21:42:22.564746 waagent[1873]: 2024-08-05T21:42:22.564637Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 5 21:42:22.565373 waagent[1873]: 2024-08-05T21:42:22.565321Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 5 21:42:22.566042 waagent[1873]: 2024-08-05T21:42:22.565997Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 5 21:42:22.568443 waagent[1873]: 2024-08-05T21:42:22.568400Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 5 21:42:22.568443 waagent[1873]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 5 21:42:22.568443 waagent[1873]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Aug 5 21:42:22.568443 waagent[1873]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 5 21:42:22.568443 waagent[1873]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 5 21:42:22.568443 waagent[1873]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 5 21:42:22.568443 waagent[1873]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 5 21:42:22.579178 waagent[1873]: 2024-08-05T21:42:22.578054Z INFO ExtHandler ExtHandler Aug 5 21:42:22.579178 waagent[1873]: 2024-08-05T21:42:22.578201Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3457553c-b00c-48ff-85c0-8a2b6e25d377 correlation adeb72ce-3bd7-4116-aee5-5870ee49fa5c created: 2024-08-05T21:41:01.271446Z] Aug 5 21:42:22.579178 waagent[1873]: 2024-08-05T21:42:22.578592Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Aug 5 21:42:22.579328 waagent[1873]: 2024-08-05T21:42:22.579179Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Aug 5 21:42:22.615107 waagent[1873]: 2024-08-05T21:42:22.614978Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 333B40B2-6FEE-4B17-80EB-32541E0BB30A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Aug 5 21:42:22.646829 waagent[1873]: 2024-08-05T21:42:22.646753Z INFO MonitorHandler ExtHandler Network interfaces: Aug 5 21:42:22.646829 waagent[1873]: Executing ['ip', '-a', '-o', 'link']: Aug 5 21:42:22.646829 waagent[1873]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 5 21:42:22.646829 waagent[1873]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:7f:2f brd ff:ff:ff:ff:ff:ff Aug 5 21:42:22.646829 waagent[1873]: 3: enP7601s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:79:7f:2f brd ff:ff:ff:ff:ff:ff\ altname enP7601p0s2 Aug 5 21:42:22.646829 waagent[1873]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 5 21:42:22.646829 waagent[1873]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 5 21:42:22.646829 waagent[1873]: 2: eth0 inet 10.200.20.35/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 5 21:42:22.646829 waagent[1873]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 5 21:42:22.646829 waagent[1873]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 5 21:42:22.646829 waagent[1873]: 2: eth0 inet6 fe80::222:48ff:fe79:7f2f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 5 21:42:22.646829 waagent[1873]: 3: enP7601s1 inet6 fe80::222:48ff:fe79:7f2f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 5 21:42:22.686866 waagent[1873]: 2024-08-05T21:42:22.686351Z INFO EnvHandler ExtHandler Successfully added Azure fabric 
firewall rules. Current Firewall rules: Aug 5 21:42:22.686866 waagent[1873]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 21:42:22.686866 waagent[1873]: pkts bytes target prot opt in out source destination Aug 5 21:42:22.686866 waagent[1873]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 5 21:42:22.686866 waagent[1873]: pkts bytes target prot opt in out source destination Aug 5 21:42:22.686866 waagent[1873]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 21:42:22.686866 waagent[1873]: pkts bytes target prot opt in out source destination Aug 5 21:42:22.686866 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 5 21:42:22.686866 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 5 21:42:22.686866 waagent[1873]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 5 21:42:22.691223 waagent[1873]: 2024-08-05T21:42:22.691120Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 5 21:42:22.691223 waagent[1873]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 21:42:22.691223 waagent[1873]: pkts bytes target prot opt in out source destination Aug 5 21:42:22.691223 waagent[1873]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 5 21:42:22.691223 waagent[1873]: pkts bytes target prot opt in out source destination Aug 5 21:42:22.691223 waagent[1873]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 21:42:22.691223 waagent[1873]: pkts bytes target prot opt in out source destination Aug 5 21:42:22.691223 waagent[1873]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 5 21:42:22.691223 waagent[1873]: 5 467 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 5 21:42:22.691223 waagent[1873]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 5 21:42:22.691511 waagent[1873]: 2024-08-05T21:42:22.691468Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 5 21:42:22.902945 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 21:42:22.910362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:42:25.484714 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:42:25.489426 (kubelet)[2103]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:42:25.558445 kubelet[2103]: E0805 21:42:25.558382 2103 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:42:25.562833 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:42:25.562987 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:42:34.919307 chronyd[1657]: Selected source PHC0 Aug 5 21:42:35.653061 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 21:42:35.659358 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:42:35.856842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
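Annotation: the routing table that MonitorHandler printed a few entries above is /proc/net/route verbatim, where destination, gateway, and mask are little-endian hex IPv4 values. Decoding them (illustrative sketch, not waagent code) recovers the dotted-quad form: "0114C80A" is 10.200.20.1, the DHCP gateway logged earlier, and "10813FA8" is 168.63.129.16, the WireServer address that the firewall rules above protect.

    import socket
    import struct

    def decode(hex_value: str) -> str:
        # /proc/net/route stores IPv4 fields as little-endian hexadecimal
        return socket.inet_ntoa(struct.pack("<L", int(hex_value, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            iface, dest, gateway, *_rest = line.split()
            print(f"{iface}: dest={decode(dest)} gw={decode(gateway)}")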
Aug 5 21:42:35.863479 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:42:36.295943 kubelet[2119]: E0805 21:42:36.295880 2119 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:42:36.298290 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:42:36.298411 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:42:38.486148 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 21:42:38.487300 systemd[1]: Started sshd@0-10.200.20.35:22-10.200.16.10:37150.service - OpenSSH per-connection server daemon (10.200.16.10:37150). Aug 5 21:42:39.024594 sshd[2129]: Accepted publickey for core from 10.200.16.10 port 37150 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:39.025918 sshd[2129]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:39.029858 systemd-logind[1666]: New session 3 of user core. Aug 5 21:42:39.041331 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 21:42:39.445506 systemd[1]: Started sshd@1-10.200.20.35:22-10.200.16.10:58728.service - OpenSSH per-connection server daemon (10.200.16.10:58728). Aug 5 21:42:39.907762 sshd[2134]: Accepted publickey for core from 10.200.16.10 port 58728 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:39.909064 sshd[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:39.912743 systemd-logind[1666]: New session 4 of user core. Aug 5 21:42:39.918301 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 21:42:40.254873 sshd[2134]: pam_unix(sshd:session): session closed for user core Aug 5 21:42:40.257823 systemd[1]: sshd@1-10.200.20.35:22-10.200.16.10:58728.service: Deactivated successfully. Aug 5 21:42:40.259847 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 21:42:40.261078 systemd-logind[1666]: Session 4 logged out. Waiting for processes to exit. Aug 5 21:42:40.262497 systemd-logind[1666]: Removed session 4. Aug 5 21:42:40.334113 systemd[1]: Started sshd@2-10.200.20.35:22-10.200.16.10:58738.service - OpenSSH per-connection server daemon (10.200.16.10:58738). Aug 5 21:42:40.759324 sshd[2141]: Accepted publickey for core from 10.200.16.10 port 58738 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:40.760630 sshd[2141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:40.765337 systemd-logind[1666]: New session 5 of user core. Aug 5 21:42:40.771361 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 21:42:41.070330 sshd[2141]: pam_unix(sshd:session): session closed for user core Aug 5 21:42:41.073497 systemd[1]: sshd@2-10.200.20.35:22-10.200.16.10:58738.service: Deactivated successfully. Aug 5 21:42:41.075300 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 21:42:41.076205 systemd-logind[1666]: Session 5 logged out. Waiting for processes to exit. Aug 5 21:42:41.077072 systemd-logind[1666]: Removed session 5. Aug 5 21:42:41.158458 systemd[1]: Started sshd@3-10.200.20.35:22-10.200.16.10:58752.service - OpenSSH per-connection server daemon (10.200.16.10:58752). 
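Annotation: sshd above logs the accepted key as "RSA SHA256:2YfCc...", an OpenSSH-style fingerprint: the SHA-256 digest of the base64-decoded key blob, base64-encoded with the trailing padding stripped. The sketch below recomputes that form from the authorized_keys file that update-ssh-keys refreshed earlier in the log; it is an illustration, not part of sshd.

    import base64
    import hashlib

    with open("/home/core/.ssh/authorized_keys") as f:
        for line in f:
            parts = line.split()
            if line.startswith("#") or len(parts) < 2:
                continue
            blob = base64.b64decode(parts[1])          # the key blob field
            digest = hashlib.sha256(blob).digest()
            fingerprint = base64.b64encode(digest).rstrip(b"=").decode()
            print(parts[0], "SHA256:" + fingerprint)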
Aug 5 21:42:41.617398 sshd[2148]: Accepted publickey for core from 10.200.16.10 port 58752 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:41.618757 sshd[2148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:41.622650 systemd-logind[1666]: New session 6 of user core. Aug 5 21:42:41.629299 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 21:42:41.962622 sshd[2148]: pam_unix(sshd:session): session closed for user core Aug 5 21:42:41.965666 systemd[1]: sshd@3-10.200.20.35:22-10.200.16.10:58752.service: Deactivated successfully. Aug 5 21:42:41.967111 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 21:42:41.967961 systemd-logind[1666]: Session 6 logged out. Waiting for processes to exit. Aug 5 21:42:41.968862 systemd-logind[1666]: Removed session 6. Aug 5 21:42:42.047239 systemd[1]: Started sshd@4-10.200.20.35:22-10.200.16.10:58762.service - OpenSSH per-connection server daemon (10.200.16.10:58762). Aug 5 21:42:42.512104 sshd[2155]: Accepted publickey for core from 10.200.16.10 port 58762 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:42.513440 sshd[2155]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:42.518296 systemd-logind[1666]: New session 7 of user core. Aug 5 21:42:42.523374 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 21:42:42.916996 sudo[2158]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 21:42:42.917267 sudo[2158]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:42:42.942326 sudo[2158]: pam_unix(sudo:session): session closed for user root Aug 5 21:42:43.010180 sshd[2155]: pam_unix(sshd:session): session closed for user core Aug 5 21:42:43.013973 systemd[1]: sshd@4-10.200.20.35:22-10.200.16.10:58762.service: Deactivated successfully. Aug 5 21:42:43.015998 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 21:42:43.016681 systemd-logind[1666]: Session 7 logged out. Waiting for processes to exit. Aug 5 21:42:43.017595 systemd-logind[1666]: Removed session 7. Aug 5 21:42:43.093780 systemd[1]: Started sshd@5-10.200.20.35:22-10.200.16.10:58772.service - OpenSSH per-connection server daemon (10.200.16.10:58772). Aug 5 21:42:43.554407 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 58772 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:43.555805 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:43.560548 systemd-logind[1666]: New session 8 of user core. Aug 5 21:42:43.562349 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 21:42:43.818153 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 21:42:43.818799 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:42:43.821851 sudo[2167]: pam_unix(sudo:session): session closed for user root Aug 5 21:42:43.826492 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 21:42:43.826718 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:42:43.849662 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 21:42:43.850832 auditctl[2170]: No rules Aug 5 21:42:43.851145 systemd[1]: audit-rules.service: Deactivated successfully. 
Aug 5 21:42:43.851500 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 21:42:43.853836 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 21:42:43.877329 augenrules[2188]: No rules Aug 5 21:42:43.878867 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 21:42:43.880234 sudo[2166]: pam_unix(sudo:session): session closed for user root Aug 5 21:42:43.948444 sshd[2163]: pam_unix(sshd:session): session closed for user core Aug 5 21:42:43.950991 systemd[1]: sshd@5-10.200.20.35:22-10.200.16.10:58772.service: Deactivated successfully. Aug 5 21:42:43.952748 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 21:42:43.954279 systemd-logind[1666]: Session 8 logged out. Waiting for processes to exit. Aug 5 21:42:43.955622 systemd-logind[1666]: Removed session 8. Aug 5 21:42:44.032337 systemd[1]: Started sshd@6-10.200.20.35:22-10.200.16.10:58786.service - OpenSSH per-connection server daemon (10.200.16.10:58786). Aug 5 21:42:44.496299 sshd[2196]: Accepted publickey for core from 10.200.16.10 port 58786 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:42:44.497590 sshd[2196]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:42:44.501326 systemd-logind[1666]: New session 9 of user core. Aug 5 21:42:44.513310 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 21:42:44.761768 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 21:42:44.761998 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 21:42:45.221413 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 21:42:45.222787 (dockerd)[2208]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 21:42:45.819844 dockerd[2208]: time="2024-08-05T21:42:45.819783950Z" level=info msg="Starting up" Aug 5 21:42:45.867958 dockerd[2208]: time="2024-08-05T21:42:45.867876944Z" level=info msg="Loading containers: start." Aug 5 21:42:46.040189 kernel: Initializing XFRM netlink socket Aug 5 21:42:46.169812 systemd-networkd[1451]: docker0: Link UP Aug 5 21:42:46.197942 dockerd[2208]: time="2024-08-05T21:42:46.197471851Z" level=info msg="Loading containers: done." Aug 5 21:42:46.402903 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 5 21:42:46.410345 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:42:47.200499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:42:47.216531 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:42:47.258870 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:42:47.379792 kubelet[2309]: E0805 21:42:47.257018 2309 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:42:47.258992 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
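The kubelet.service failures recorded above all report the same root cause: /var/lib/kubelet/config.yaml does not exist yet, so the kubelet exits with status 1 and systemd keeps rescheduling the unit until something (typically kubeadm init/join) writes that file. A minimal Python sketch of the same pre-flight check, illustrative only and assuming nothing beyond the path shown in the error; the helper name is made up for this sketch:

    import os
    import sys

    KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"  # path taken from the error above

    def kubelet_config_present(path: str = KUBELET_CONFIG) -> bool:
        """Return True when the kubelet config file exists and is readable."""
        return os.path.isfile(path) and os.access(path, os.R_OK)

    if __name__ == "__main__":
        if not kubelet_config_present():
            # Mirrors the logged failure mode: exit non-zero so the supervisor
            # (here systemd) records status=1/FAILURE and schedules a restart.
            print(f"failed to load kubelet config file, path: {KUBELET_CONFIG}",
                  file=sys.stderr)
            sys.exit(1)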
Aug 5 21:42:47.380989 dockerd[2208]: time="2024-08-05T21:42:47.380482630Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 21:42:47.380989 dockerd[2208]: time="2024-08-05T21:42:47.380666390Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 21:42:47.380989 dockerd[2208]: time="2024-08-05T21:42:47.380780390Z" level=info msg="Daemon has completed initialization" Aug 5 21:42:47.468525 dockerd[2208]: time="2024-08-05T21:42:47.467891764Z" level=info msg="API listen on /run/docker.sock" Aug 5 21:42:47.469430 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 21:42:48.947097 containerd[1701]: time="2024-08-05T21:42:48.947044073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\"" Aug 5 21:42:50.066481 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2113719054.mount: Deactivated successfully. Aug 5 21:42:51.715127 containerd[1701]: time="2024-08-05T21:42:51.715042440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:51.717508 containerd[1701]: time="2024-08-05T21:42:51.717474484Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.7: active requests=0, bytes read=32285111" Aug 5 21:42:51.720196 containerd[1701]: time="2024-08-05T21:42:51.720004809Z" level=info msg="ImageCreate event name:\"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:51.724352 containerd[1701]: time="2024-08-05T21:42:51.724298377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:51.725636 containerd[1701]: time="2024-08-05T21:42:51.725363419Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.7\" with image id \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:7b104771c13b9e3537846c3f6949000785e1fbc66d07f123ebcea22c8eb918b3\", size \"32281911\" in 2.778273906s" Aug 5 21:42:51.725636 containerd[1701]: time="2024-08-05T21:42:51.725402059Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.7\" returns image reference \"sha256:09da0e2c1634057a9cb3d1ab3187c1e87431acaae308ee0504a9f637fc1b1165\"" Aug 5 21:42:51.744844 containerd[1701]: time="2024-08-05T21:42:51.744782976Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\"" Aug 5 21:42:53.500844 containerd[1701]: time="2024-08-05T21:42:53.500782024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:53.502941 containerd[1701]: time="2024-08-05T21:42:53.502906388Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.7: active requests=0, bytes read=29362251" Aug 5 21:42:53.506076 containerd[1701]: time="2024-08-05T21:42:53.506044874Z" level=info msg="ImageCreate event name:\"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 5 21:42:53.511717 containerd[1701]: time="2024-08-05T21:42:53.511641924Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:53.513015 containerd[1701]: time="2024-08-05T21:42:53.512887887Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.7\" with image id \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e3356f078f7ce72984385d4ca5e726a8cb05ce355d6b158f41aa9b5dbaff9b19\", size \"30849518\" in 1.768016591s" Aug 5 21:42:53.513015 containerd[1701]: time="2024-08-05T21:42:53.512925367Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.7\" returns image reference \"sha256:42d71ec0804ba94e173cb2bf05d873aad38ec4db300c158498d54f2b8c8368d1\"" Aug 5 21:42:53.534378 containerd[1701]: time="2024-08-05T21:42:53.534262887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\"" Aug 5 21:42:54.690242 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Aug 5 21:42:55.191192 containerd[1701]: time="2024-08-05T21:42:55.191109987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:55.193943 containerd[1701]: time="2024-08-05T21:42:55.193901272Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.7: active requests=0, bytes read=15751349" Aug 5 21:42:55.198083 containerd[1701]: time="2024-08-05T21:42:55.198024880Z" level=info msg="ImageCreate event name:\"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:55.203776 containerd[1701]: time="2024-08-05T21:42:55.203708611Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:55.204993 containerd[1701]: time="2024-08-05T21:42:55.204855693Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.7\" with image id \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c6203fbc102cc80a7d934946b7eacb7491480a65db56db203cb3035deecaaa39\", size \"17238634\" in 1.670552246s" Aug 5 21:42:55.204993 containerd[1701]: time="2024-08-05T21:42:55.204895853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.7\" returns image reference \"sha256:aa0debff447ecc9a9254154628d35be75d6ddcf6f680bc2672e176729f16ac03\"" Aug 5 21:42:55.225230 containerd[1701]: time="2024-08-05T21:42:55.225187692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\"" Aug 5 21:42:56.712238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4025348371.mount: Deactivated successfully. Aug 5 21:42:56.894378 update_engine[1676]: I0805 21:42:56.894323 1676 update_attempter.cc:509] Updating boot flags... Aug 5 21:42:57.060882 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2450) Aug 5 21:42:57.389946 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. 
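The containerd entries above report each completed pull with the image size in bytes and the wall-clock duration (for example, 32281911 bytes in 2.778273906s for kube-apiserver:v1.29.7). Dividing the two gives the effective pull throughput; a quick sketch using only the size/duration pairs copied from the log:

    # Size (bytes) and duration (seconds) pairs copied from the containerd
    # "Pulled image" lines above.
    pulls = {
        "registry.k8s.io/kube-apiserver:v1.29.7": (32281911, 2.778273906),
        "registry.k8s.io/kube-controller-manager:v1.29.7": (30849518, 1.768016591),
        "registry.k8s.io/kube-scheduler:v1.29.7": (17238634, 1.670552246),
    }

    for image, (size_bytes, seconds) in pulls.items():
        mib_per_s = size_bytes / seconds / (1024 * 1024)
        print(f"{image}: {mib_per_s:.1f} MiB/s")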
Aug 5 21:42:57.400462 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:42:57.435146 containerd[1701]: time="2024-08-05T21:42:57.434475758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:57.462424 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2443) Aug 5 21:42:57.473498 containerd[1701]: time="2024-08-05T21:42:57.473456832Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.7: active requests=0, bytes read=25251732" Aug 5 21:42:57.522185 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2443) Aug 5 21:42:57.716780 containerd[1701]: time="2024-08-05T21:42:57.715678131Z" level=info msg="ImageCreate event name:\"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:57.740973 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:42:57.745541 (kubelet)[2540]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:42:57.792089 kubelet[2540]: E0805 21:42:57.792002 2540 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:42:57.794651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:42:57.794803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:42:57.874339 containerd[1701]: time="2024-08-05T21:42:57.874255752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:42:57.875229 containerd[1701]: time="2024-08-05T21:42:57.875081913Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.7\" with image id \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\", repo tag \"registry.k8s.io/kube-proxy:v1.29.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:4d5e787d71c41243379cbb323d2b3a920fa50825cab19d20ef3344a808d18c4e\", size \"25250751\" in 2.649854381s" Aug 5 21:42:57.875229 containerd[1701]: time="2024-08-05T21:42:57.875120954Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.7\" returns image reference \"sha256:25c9adc8cf12a1aec7e02751b8e9faca4907a0551a6d16c425e576622fdb59db\"" Aug 5 21:42:57.894910 containerd[1701]: time="2024-08-05T21:42:57.894853911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Aug 5 21:42:58.600801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1255455985.mount: Deactivated successfully. 
Aug 5 21:43:01.425188 containerd[1701]: time="2024-08-05T21:43:01.424906132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:01.427628 containerd[1701]: time="2024-08-05T21:43:01.427592093Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Aug 5 21:43:01.432029 containerd[1701]: time="2024-08-05T21:43:01.431994335Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:01.440635 containerd[1701]: time="2024-08-05T21:43:01.439315258Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:01.440635 containerd[1701]: time="2024-08-05T21:43:01.440213219Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 3.545319628s" Aug 5 21:43:01.440635 containerd[1701]: time="2024-08-05T21:43:01.440247739Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Aug 5 21:43:01.460819 containerd[1701]: time="2024-08-05T21:43:01.460704629Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 21:43:02.102557 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263800386.mount: Deactivated successfully. 
Aug 5 21:43:02.129199 containerd[1701]: time="2024-08-05T21:43:02.128712865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:02.131846 containerd[1701]: time="2024-08-05T21:43:02.131666386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Aug 5 21:43:02.135738 containerd[1701]: time="2024-08-05T21:43:02.135689268Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:02.141351 containerd[1701]: time="2024-08-05T21:43:02.141278511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:02.142101 containerd[1701]: time="2024-08-05T21:43:02.141982511Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 681.225522ms" Aug 5 21:43:02.142101 containerd[1701]: time="2024-08-05T21:43:02.142016951Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Aug 5 21:43:02.161867 containerd[1701]: time="2024-08-05T21:43:02.161747401Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 21:43:02.882978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3004831191.mount: Deactivated successfully. Aug 5 21:43:05.424152 containerd[1701]: time="2024-08-05T21:43:05.424092676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:05.426181 containerd[1701]: time="2024-08-05T21:43:05.426002280Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Aug 5 21:43:05.429661 containerd[1701]: time="2024-08-05T21:43:05.429600967Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:05.437015 containerd[1701]: time="2024-08-05T21:43:05.436942982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:05.438386 containerd[1701]: time="2024-08-05T21:43:05.438258385Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 3.276462664s" Aug 5 21:43:05.438386 containerd[1701]: time="2024-08-05T21:43:05.438297945Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Aug 5 21:43:07.902889 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
Aug 5 21:43:07.908525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:43:08.021376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:43:08.024284 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 21:43:08.072002 kubelet[2722]: E0805 21:43:08.071950 2722 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 21:43:08.075188 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 21:43:08.075458 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 21:43:11.383314 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:43:11.394413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:43:11.413691 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-9.scope)... Aug 5 21:43:11.413850 systemd[1]: Reloading... Aug 5 21:43:11.518243 zram_generator::config[2785]: No configuration found. Aug 5 21:43:11.606374 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:43:11.685512 systemd[1]: Reloading finished in 271 ms. Aug 5 21:43:11.832799 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 21:43:11.832888 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 21:43:11.833449 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:43:11.840476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:43:17.443198 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:43:17.456485 (kubelet)[2837]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:43:17.498888 kubelet[2837]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:43:17.500174 kubelet[2837]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:43:17.500174 kubelet[2837]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
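The deprecation warnings that follow (--container-runtime-endpoint, --pod-infra-container-image, --volume-plugin-dir) all point at the same remedy: moving those settings into the KubeletConfiguration file passed via --config. A hedged sketch of generating such a file; the field names follow kubelet.config.k8s.io/v1beta1, but the concrete values are placeholders rather than settings recovered from this node, and the sketch writes to a local config.yaml, not the system path (requires PyYAML):

    import yaml  # PyYAML

    # Placeholder KubeletConfiguration covering two of the deprecated flags;
    # the socket path and plugin directory are examples only. In a kubeadm
    # setup this content would normally end up in /var/lib/kubelet/config.yaml.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
    }

    with open("config.yaml", "w") as f:
        yaml.safe_dump(kubelet_config, f, sort_keys=False)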
Aug 5 21:43:17.500174 kubelet[2837]: I0805 21:43:17.499340 2837 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:43:18.425003 kubelet[2837]: I0805 21:43:18.424964 2837 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 21:43:18.425003 kubelet[2837]: I0805 21:43:18.424995 2837 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:43:18.426839 kubelet[2837]: I0805 21:43:18.425641 2837 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 21:43:18.440448 kubelet[2837]: I0805 21:43:18.440419 2837 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:43:18.440913 kubelet[2837]: E0805 21:43:18.440885 2837 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.448311 kubelet[2837]: I0805 21:43:18.448281 2837 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 21:43:18.448516 kubelet[2837]: I0805 21:43:18.448500 2837 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:43:18.448693 kubelet[2837]: I0805 21:43:18.448675 2837 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:43:18.448779 kubelet[2837]: I0805 21:43:18.448699 2837 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:43:18.448779 kubelet[2837]: I0805 21:43:18.448708 2837 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:43:18.449991 kubelet[2837]: I0805 21:43:18.449966 2837 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:43:18.452127 kubelet[2837]: I0805 21:43:18.452105 2837 kubelet.go:396] "Attempting to sync node with API server" Aug 5 21:43:18.452175 kubelet[2837]: I0805 
21:43:18.452136 2837 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:43:18.452371 kubelet[2837]: I0805 21:43:18.452353 2837 kubelet.go:312] "Adding apiserver pod source" Aug 5 21:43:18.452400 kubelet[2837]: I0805 21:43:18.452377 2837 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:43:18.455337 kubelet[2837]: W0805 21:43:18.455268 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-183bdb833d&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.455406 kubelet[2837]: E0805 21:43:18.455345 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-183bdb833d&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.456375 kubelet[2837]: I0805 21:43:18.455466 2837 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:43:18.456375 kubelet[2837]: I0805 21:43:18.455738 2837 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 21:43:18.457714 kubelet[2837]: W0805 21:43:18.456738 2837 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 21:43:18.457714 kubelet[2837]: I0805 21:43:18.457287 2837 server.go:1256] "Started kubelet" Aug 5 21:43:18.460405 kubelet[2837]: W0805 21:43:18.460365 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.460405 kubelet[2837]: E0805 21:43:18.460408 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.462359 kubelet[2837]: E0805 21:43:18.462321 2837 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.1.0-a-183bdb833d.17e8f3252ca06660 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.1.0-a-183bdb833d,UID:ci-4012.1.0-a-183bdb833d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.1.0-a-183bdb833d,},FirstTimestamp:2024-08-05 21:43:18.457263712 +0000 UTC m=+0.997262023,LastTimestamp:2024-08-05 21:43:18.457263712 +0000 UTC m=+0.997262023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.1.0-a-183bdb833d,}" Aug 5 21:43:18.462516 kubelet[2837]: I0805 21:43:18.462499 2837 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 21:43:18.462832 kubelet[2837]: I0805 21:43:18.462798 2837 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:43:18.462881 kubelet[2837]: I0805 21:43:18.462864 2837 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:43:18.463640 kubelet[2837]: I0805 21:43:18.463615 2837 server.go:461] "Adding debug handlers to kubelet server" Aug 5 21:43:18.464912 kubelet[2837]: I0805 21:43:18.464888 2837 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:43:18.468454 kubelet[2837]: I0805 21:43:18.468406 2837 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:43:18.469570 kubelet[2837]: I0805 21:43:18.469544 2837 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:43:18.469770 kubelet[2837]: I0805 21:43:18.469758 2837 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:43:18.472013 kubelet[2837]: W0805 21:43:18.471968 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.472169 kubelet[2837]: E0805 21:43:18.472145 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.472369 kubelet[2837]: E0805 21:43:18.472356 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-183bdb833d?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="200ms" Aug 5 21:43:18.472462 kubelet[2837]: I0805 21:43:18.472435 2837 factory.go:221] Registration of the systemd container factory successfully Aug 5 21:43:18.473135 kubelet[2837]: I0805 21:43:18.473095 2837 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 21:43:18.474768 kubelet[2837]: I0805 21:43:18.474678 2837 factory.go:221] Registration of the containerd container factory successfully Aug 5 21:43:18.478286 kubelet[2837]: E0805 21:43:18.478261 2837 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:43:18.495853 kubelet[2837]: I0805 21:43:18.495826 2837 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:43:18.497817 kubelet[2837]: I0805 21:43:18.497792 2837 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 21:43:18.498292 kubelet[2837]: I0805 21:43:18.497984 2837 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:43:18.498292 kubelet[2837]: I0805 21:43:18.498013 2837 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 21:43:18.498292 kubelet[2837]: E0805 21:43:18.498062 2837 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:43:18.499402 kubelet[2837]: W0805 21:43:18.499358 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.499680 kubelet[2837]: E0805 21:43:18.499407 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:18.530422 kubelet[2837]: I0805 21:43:18.530240 2837 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:43:18.530422 kubelet[2837]: I0805 21:43:18.530277 2837 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:43:18.530422 kubelet[2837]: I0805 21:43:18.530297 2837 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:43:18.571188 kubelet[2837]: I0805 21:43:18.570970 2837 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:18.571363 kubelet[2837]: E0805 21:43:18.571317 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:18.598653 kubelet[2837]: E0805 21:43:18.598628 2837 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:43:18.673254 kubelet[2837]: E0805 21:43:18.673231 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-183bdb833d?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="400ms" Aug 5 21:43:18.773177 kubelet[2837]: I0805 21:43:18.773027 2837 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:18.774286 kubelet[2837]: E0805 21:43:18.774261 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:18.799400 kubelet[2837]: E0805 21:43:18.799370 2837 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:43:19.073920 kubelet[2837]: E0805 21:43:19.073811 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-183bdb833d?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="800ms" Aug 5 21:43:19.176831 kubelet[2837]: I0805 21:43:19.176775 2837 kubelet_node_status.go:73] "Attempting to register node" 
node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:19.177179 kubelet[2837]: E0805 21:43:19.177142 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:19.200280 kubelet[2837]: E0805 21:43:19.200258 2837 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:43:19.275241 kubelet[2837]: W0805 21:43:19.275138 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.275241 kubelet[2837]: E0805 21:43:19.275220 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.374942 kubelet[2837]: W0805 21:43:19.374811 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.374942 kubelet[2837]: E0805 21:43:19.374849 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.431378 kubelet[2837]: E0805 21:43:19.431342 2837 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.35:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4012.1.0-a-183bdb833d.17e8f3252ca06660 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4012.1.0-a-183bdb833d,UID:ci-4012.1.0-a-183bdb833d,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4012.1.0-a-183bdb833d,},FirstTimestamp:2024-08-05 21:43:18.457263712 +0000 UTC m=+0.997262023,LastTimestamp:2024-08-05 21:43:18.457263712 +0000 UTC m=+0.997262023,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4012.1.0-a-183bdb833d,}" Aug 5 21:43:19.679335 kubelet[2837]: W0805 21:43:19.679281 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-183bdb833d&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.679335 kubelet[2837]: E0805 21:43:19.679340 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-183bdb833d&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.875226 kubelet[2837]: E0805 21:43:19.875198 2837 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-183bdb833d?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="1.6s" Aug 5 21:43:19.928704 kubelet[2837]: W0805 21:43:19.928674 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.928704 kubelet[2837]: E0805 21:43:19.928709 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:19.979612 kubelet[2837]: I0805 21:43:19.979525 2837 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:19.979980 kubelet[2837]: E0805 21:43:19.979824 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:20.001117 kubelet[2837]: E0805 21:43:20.001074 2837 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 21:43:20.873106 kubelet[2837]: E0805 21:43:20.567390 2837 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:20.914987 kubelet[2837]: I0805 21:43:20.914893 2837 policy_none.go:49] "None policy: Start" Aug 5 21:43:20.915644 kubelet[2837]: I0805 21:43:20.915621 2837 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 21:43:20.915738 kubelet[2837]: I0805 21:43:20.915663 2837 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:43:20.984832 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 21:43:20.995914 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 21:43:21.000275 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
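One pattern worth noting in the "Failed to ensure lease exists, will retry" errors: the logged interval doubles after every failed attempt (200ms, 400ms, 800ms, 1.6s above, continuing to 3.2s and 6.4s below), i.e. a plain exponential backoff while the API server at 10.200.20.35:6443 is unreachable. A small sketch reproducing that schedule; the 7-second cap is an assumption for illustration, not a value taken from the log:

    # Reproduce the doubling retry intervals visible in the lease-controller
    # errors: 0.2s -> 0.4s -> 0.8s -> 1.6s -> 3.2s -> 6.4s.
    def backoff_intervals(start: float = 0.2, factor: float = 2.0, cap: float = 7.0):
        interval = start
        while True:
            yield min(interval, cap)  # cap is an assumed upper bound
            interval *= factor

    gen = backoff_intervals()
    print([round(next(gen), 1) for _ in range(6)])  # [0.2, 0.4, 0.8, 1.6, 3.2, 6.4]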
Aug 5 21:43:21.008915 kubelet[2837]: I0805 21:43:21.008872 2837 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:43:21.009175 kubelet[2837]: I0805 21:43:21.009146 2837 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:43:21.011468 kubelet[2837]: E0805 21:43:21.011432 2837 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.1.0-a-183bdb833d\" not found" Aug 5 21:43:21.476384 kubelet[2837]: E0805 21:43:21.476348 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-183bdb833d?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="3.2s" Aug 5 21:43:21.484804 kubelet[2837]: W0805 21:43:21.484753 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:21.484804 kubelet[2837]: E0805 21:43:21.484787 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.35:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:21.581561 kubelet[2837]: I0805 21:43:21.581526 2837 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.581883 kubelet[2837]: E0805 21:43:21.581845 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.602249 kubelet[2837]: I0805 21:43:21.602228 2837 topology_manager.go:215] "Topology Admit Handler" podUID="87fa20ac2285d6242938337fb751bde2" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.604027 kubelet[2837]: I0805 21:43:21.603805 2837 topology_manager.go:215] "Topology Admit Handler" podUID="1b8225ad3cdb2c0353956e70ad86ca72" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.605600 kubelet[2837]: I0805 21:43:21.605573 2837 topology_manager.go:215] "Topology Admit Handler" podUID="49a740d55d9ed830f755a27aa7013033" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.612290 systemd[1]: Created slice kubepods-burstable-pod87fa20ac2285d6242938337fb751bde2.slice - libcontainer container kubepods-burstable-pod87fa20ac2285d6242938337fb751bde2.slice. Aug 5 21:43:21.625921 systemd[1]: Created slice kubepods-burstable-pod1b8225ad3cdb2c0353956e70ad86ca72.slice - libcontainer container kubepods-burstable-pod1b8225ad3cdb2c0353956e70ad86ca72.slice. Aug 5 21:43:21.630694 systemd[1]: Created slice kubepods-burstable-pod49a740d55d9ed830f755a27aa7013033.slice - libcontainer container kubepods-burstable-pod49a740d55d9ed830f755a27aa7013033.slice. 
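The slice names created just above encode each static pod's QoS class and UID: with the systemd cgroup driver, a burstable pod with UID <uid> runs under kubepods-burstable-pod<uid>.slice (dashes in a UID would be written as underscores; these manifest-hash UIDs contain none). A small reconstruction using only the UIDs the Topology Admit Handler logged:

    # Rebuild the systemd slice names seen above from the static pod UIDs.
    def pod_slice(uid: str, qos: str = "burstable") -> str:
        # The kubelet's systemd cgroup driver maps '-' in the UID to '_'
        # (a no-op for these hash-style UIDs).
        return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

    for uid in (
        "87fa20ac2285d6242938337fb751bde2",  # kube-apiserver
        "1b8225ad3cdb2c0353956e70ad86ca72",  # kube-controller-manager
        "49a740d55d9ed830f755a27aa7013033",  # kube-scheduler
    ):
        print(pod_slice(uid))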
Aug 5 21:43:21.684892 kubelet[2837]: I0805 21:43:21.684862 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/87fa20ac2285d6242938337fb751bde2-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" (UID: \"87fa20ac2285d6242938337fb751bde2\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685181 kubelet[2837]: I0805 21:43:21.685059 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87fa20ac2285d6242938337fb751bde2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" (UID: \"87fa20ac2285d6242938337fb751bde2\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685181 kubelet[2837]: I0805 21:43:21.685104 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685181 kubelet[2837]: I0805 21:43:21.685129 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49a740d55d9ed830f755a27aa7013033-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-a-183bdb833d\" (UID: \"49a740d55d9ed830f755a27aa7013033\") " pod="kube-system/kube-scheduler-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685458 kubelet[2837]: I0805 21:43:21.685314 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87fa20ac2285d6242938337fb751bde2-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" (UID: \"87fa20ac2285d6242938337fb751bde2\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685458 kubelet[2837]: I0805 21:43:21.685357 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685458 kubelet[2837]: I0805 21:43:21.685394 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685458 kubelet[2837]: I0805 21:43:21.685423 2837 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.685605 kubelet[2837]: I0805 21:43:21.685585 2837 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:21.905243 kubelet[2837]: W0805 21:43:21.905149 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:21.905243 kubelet[2837]: E0805 21:43:21.905217 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:21.924946 containerd[1701]: time="2024-08-05T21:43:21.924901244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-a-183bdb833d,Uid:87fa20ac2285d6242938337fb751bde2,Namespace:kube-system,Attempt:0,}" Aug 5 21:43:21.929702 containerd[1701]: time="2024-08-05T21:43:21.929653974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-a-183bdb833d,Uid:1b8225ad3cdb2c0353956e70ad86ca72,Namespace:kube-system,Attempt:0,}" Aug 5 21:43:21.933896 containerd[1701]: time="2024-08-05T21:43:21.933325901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-a-183bdb833d,Uid:49a740d55d9ed830f755a27aa7013033,Namespace:kube-system,Attempt:0,}" Aug 5 21:43:22.665646 kubelet[2837]: W0805 21:43:22.665598 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:22.665646 kubelet[2837]: E0805 21:43:22.665648 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:22.739504 kubelet[2837]: W0805 21:43:22.739463 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-183bdb833d&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:22.739504 kubelet[2837]: E0805 21:43:22.739504 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-183bdb833d&limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:24.676947 kubelet[2837]: E0805 21:43:24.676906 2837 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:24.677344 
kubelet[2837]: E0805 21:43:24.676996 2837 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-183bdb833d?timeout=10s\": dial tcp 10.200.20.35:6443: connect: connection refused" interval="6.4s" Aug 5 21:43:24.784417 kubelet[2837]: I0805 21:43:24.784379 2837 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:24.785094 kubelet[2837]: E0805 21:43:24.784737 2837 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.35:6443/api/v1/nodes\": dial tcp 10.200.20.35:6443: connect: connection refused" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:24.792212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3031466223.mount: Deactivated successfully. Aug 5 21:43:24.829211 containerd[1701]: time="2024-08-05T21:43:24.828693348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:43:24.831527 containerd[1701]: time="2024-08-05T21:43:24.831481553Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Aug 5 21:43:24.834880 containerd[1701]: time="2024-08-05T21:43:24.834834760Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:43:24.840192 containerd[1701]: time="2024-08-05T21:43:24.838008366Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:43:24.846308 containerd[1701]: time="2024-08-05T21:43:24.846252183Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:43:24.848032 containerd[1701]: time="2024-08-05T21:43:24.847981267Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:43:24.850453 containerd[1701]: time="2024-08-05T21:43:24.850405391Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 21:43:24.854197 containerd[1701]: time="2024-08-05T21:43:24.854125279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 21:43:24.854950 containerd[1701]: time="2024-08-05T21:43:24.854907840Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.925042306s" Aug 5 21:43:24.858196 containerd[1701]: time="2024-08-05T21:43:24.857471286Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.924054145s" Aug 5 21:43:24.858196 containerd[1701]: time="2024-08-05T21:43:24.858179207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 2.933160162s" Aug 5 21:43:25.347244 kubelet[2837]: W0805 21:43:25.347153 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:25.347244 kubelet[2837]: E0805 21:43:25.347223 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:25.581833 containerd[1701]: time="2024-08-05T21:43:25.581530825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:25.581833 containerd[1701]: time="2024-08-05T21:43:25.581591585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:25.581833 containerd[1701]: time="2024-08-05T21:43:25.581605825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:25.581833 containerd[1701]: time="2024-08-05T21:43:25.581616065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589079440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589124761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589146841Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589180481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589007360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589101961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589122281Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:25.589380 containerd[1701]: time="2024-08-05T21:43:25.589136761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:25.613389 systemd[1]: Started cri-containerd-3512a91714f66eae06ceaa0a4a3cf618fbdec2e54ea66bcee66dcbfdf4a5452f.scope - libcontainer container 3512a91714f66eae06ceaa0a4a3cf618fbdec2e54ea66bcee66dcbfdf4a5452f. Aug 5 21:43:25.614613 systemd[1]: Started cri-containerd-92d7d7f5df34be60bde2103a7f5dd325e4de0647d62625e303d5104615bffaec.scope - libcontainer container 92d7d7f5df34be60bde2103a7f5dd325e4de0647d62625e303d5104615bffaec. Aug 5 21:43:25.620976 systemd[1]: Started cri-containerd-16ad1a3e3036960b808ab1ae4c618c8d0a1bba7c4bae4045ead0b8744d640fe2.scope - libcontainer container 16ad1a3e3036960b808ab1ae4c618c8d0a1bba7c4bae4045ead0b8744d640fe2. Aug 5 21:43:25.672923 containerd[1701]: time="2024-08-05T21:43:25.672375408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-a-183bdb833d,Uid:87fa20ac2285d6242938337fb751bde2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3512a91714f66eae06ceaa0a4a3cf618fbdec2e54ea66bcee66dcbfdf4a5452f\"" Aug 5 21:43:25.679326 containerd[1701]: time="2024-08-05T21:43:25.678664261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-a-183bdb833d,Uid:1b8225ad3cdb2c0353956e70ad86ca72,Namespace:kube-system,Attempt:0,} returns sandbox id \"92d7d7f5df34be60bde2103a7f5dd325e4de0647d62625e303d5104615bffaec\"" Aug 5 21:43:25.681794 containerd[1701]: time="2024-08-05T21:43:25.681614627Z" level=info msg="CreateContainer within sandbox \"3512a91714f66eae06ceaa0a4a3cf618fbdec2e54ea66bcee66dcbfdf4a5452f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 21:43:25.683808 containerd[1701]: time="2024-08-05T21:43:25.683394551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-a-183bdb833d,Uid:49a740d55d9ed830f755a27aa7013033,Namespace:kube-system,Attempt:0,} returns sandbox id \"16ad1a3e3036960b808ab1ae4c618c8d0a1bba7c4bae4045ead0b8744d640fe2\"" Aug 5 21:43:25.685936 containerd[1701]: time="2024-08-05T21:43:25.685693635Z" level=info msg="CreateContainer within sandbox \"92d7d7f5df34be60bde2103a7f5dd325e4de0647d62625e303d5104615bffaec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 21:43:25.687710 containerd[1701]: time="2024-08-05T21:43:25.687666079Z" level=info msg="CreateContainer within sandbox \"16ad1a3e3036960b808ab1ae4c618c8d0a1bba7c4bae4045ead0b8744d640fe2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 21:43:25.749516 containerd[1701]: time="2024-08-05T21:43:25.749293803Z" level=info msg="CreateContainer within sandbox \"3512a91714f66eae06ceaa0a4a3cf618fbdec2e54ea66bcee66dcbfdf4a5452f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b06e9bc1ad53c6ab8342865fef334aa35765e86cfd39a060225eac0361ad0015\"" Aug 5 21:43:25.750257 containerd[1701]: time="2024-08-05T21:43:25.750227205Z" level=info msg="StartContainer for \"b06e9bc1ad53c6ab8342865fef334aa35765e86cfd39a060225eac0361ad0015\"" Aug 5 21:43:25.754178 containerd[1701]: time="2024-08-05T21:43:25.753513852Z" level=info msg="CreateContainer within sandbox \"16ad1a3e3036960b808ab1ae4c618c8d0a1bba7c4bae4045ead0b8744d640fe2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"7c58a8729e78401613a21d9ff4141b1015921efd15b648c31a3381bfd449921a\"" Aug 5 21:43:25.754395 containerd[1701]: time="2024-08-05T21:43:25.754363974Z" level=info msg="StartContainer for \"7c58a8729e78401613a21d9ff4141b1015921efd15b648c31a3381bfd449921a\"" Aug 5 21:43:25.756318 containerd[1701]: time="2024-08-05T21:43:25.756267697Z" level=info msg="CreateContainer within sandbox \"92d7d7f5df34be60bde2103a7f5dd325e4de0647d62625e303d5104615bffaec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df97ff7b570d661b64072f6f0268897b9780ac74518dbd8691689849f6a00ea3\"" Aug 5 21:43:25.756847 containerd[1701]: time="2024-08-05T21:43:25.756814899Z" level=info msg="StartContainer for \"df97ff7b570d661b64072f6f0268897b9780ac74518dbd8691689849f6a00ea3\"" Aug 5 21:43:25.789401 systemd[1]: Started cri-containerd-df97ff7b570d661b64072f6f0268897b9780ac74518dbd8691689849f6a00ea3.scope - libcontainer container df97ff7b570d661b64072f6f0268897b9780ac74518dbd8691689849f6a00ea3. Aug 5 21:43:25.816362 systemd[1]: Started cri-containerd-7c58a8729e78401613a21d9ff4141b1015921efd15b648c31a3381bfd449921a.scope - libcontainer container 7c58a8729e78401613a21d9ff4141b1015921efd15b648c31a3381bfd449921a. Aug 5 21:43:25.819229 systemd[1]: Started cri-containerd-b06e9bc1ad53c6ab8342865fef334aa35765e86cfd39a060225eac0361ad0015.scope - libcontainer container b06e9bc1ad53c6ab8342865fef334aa35765e86cfd39a060225eac0361ad0015. Aug 5 21:43:25.861551 containerd[1701]: time="2024-08-05T21:43:25.861023469Z" level=info msg="StartContainer for \"df97ff7b570d661b64072f6f0268897b9780ac74518dbd8691689849f6a00ea3\" returns successfully" Aug 5 21:43:25.891568 containerd[1701]: time="2024-08-05T21:43:25.890735729Z" level=info msg="StartContainer for \"7c58a8729e78401613a21d9ff4141b1015921efd15b648c31a3381bfd449921a\" returns successfully" Aug 5 21:43:25.898883 containerd[1701]: time="2024-08-05T21:43:25.898803665Z" level=info msg="StartContainer for \"b06e9bc1ad53c6ab8342865fef334aa35765e86cfd39a060225eac0361ad0015\" returns successfully" Aug 5 21:43:25.899799 kubelet[2837]: W0805 21:43:25.899744 2837 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:25.899799 kubelet[2837]: E0805 21:43:25.899803 2837 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.35:6443: connect: connection refused Aug 5 21:43:29.465823 kubelet[2837]: I0805 21:43:29.465780 2837 apiserver.go:52] "Watching apiserver" Aug 5 21:43:29.470280 kubelet[2837]: I0805 21:43:29.470221 2837 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:43:29.489585 kubelet[2837]: E0805 21:43:29.489523 2837 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4012.1.0-a-183bdb833d" not found Aug 5 21:43:30.040257 kubelet[2837]: E0805 21:43:30.040217 2837 csi_plugin.go:300] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4012.1.0-a-183bdb833d" not found Aug 5 21:43:30.712001 kubelet[2837]: E0805 21:43:30.711943 2837 csi_plugin.go:300] Failed to initialize 
CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4012.1.0-a-183bdb833d" not found Aug 5 21:43:31.011708 kubelet[2837]: E0805 21:43:31.011588 2837 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.1.0-a-183bdb833d\" not found" Aug 5 21:43:31.081588 kubelet[2837]: E0805 21:43:31.081536 2837 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.1.0-a-183bdb833d\" not found" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:31.187780 kubelet[2837]: I0805 21:43:31.187696 2837 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:31.191985 kubelet[2837]: I0805 21:43:31.191909 2837 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:31.442495 kubelet[2837]: W0805 21:43:31.442454 2837 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 21:43:31.554216 kubelet[2837]: W0805 21:43:31.553986 2837 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 21:43:32.584530 systemd[1]: Reloading requested from client PID 3110 ('systemctl') (unit session-9.scope)... Aug 5 21:43:32.584552 systemd[1]: Reloading... Aug 5 21:43:32.700202 zram_generator::config[3147]: No configuration found. Aug 5 21:43:32.844006 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 21:43:32.953765 systemd[1]: Reloading finished in 368 ms. Aug 5 21:43:33.001300 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:43:33.008006 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 21:43:33.008461 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:43:33.008670 systemd[1]: kubelet.service: Consumed 1.381s CPU time, 114.4M memory peak, 0B memory swap peak. Aug 5 21:43:33.016519 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 21:43:33.132247 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 21:43:33.146625 (kubelet)[3211]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 21:43:33.212634 kubelet[3211]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 21:43:33.212634 kubelet[3211]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 21:43:33.212634 kubelet[3211]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 5 21:43:33.213130 kubelet[3211]: I0805 21:43:33.212724 3211 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 21:43:33.219106 kubelet[3211]: I0805 21:43:33.219036 3211 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Aug 5 21:43:33.219106 kubelet[3211]: I0805 21:43:33.219084 3211 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 21:43:33.221020 kubelet[3211]: I0805 21:43:33.220325 3211 server.go:919] "Client rotation is on, will bootstrap in background" Aug 5 21:43:33.222728 kubelet[3211]: I0805 21:43:33.222709 3211 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 21:43:33.225940 kubelet[3211]: I0805 21:43:33.225899 3211 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 21:43:33.237464 kubelet[3211]: I0805 21:43:33.237191 3211 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 21:43:33.237907 kubelet[3211]: I0805 21:43:33.237874 3211 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 21:43:33.238221 kubelet[3211]: I0805 21:43:33.238199 3211 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 21:43:33.238423 kubelet[3211]: I0805 21:43:33.238394 3211 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 21:43:33.238503 kubelet[3211]: I0805 21:43:33.238495 3211 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 21:43:33.238602 kubelet[3211]: I0805 21:43:33.238576 3211 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:43:33.238818 kubelet[3211]: I0805 21:43:33.238809 3211 kubelet.go:396] "Attempting to sync node with API server" Aug 5 21:43:33.239500 kubelet[3211]: I0805 21:43:33.239435 3211 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 21:43:33.239568 kubelet[3211]: I0805 21:43:33.239514 3211 kubelet.go:312] "Adding apiserver pod source" Aug 5 21:43:33.239568 
kubelet[3211]: I0805 21:43:33.239531 3211 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 21:43:33.241809 kubelet[3211]: I0805 21:43:33.241739 3211 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 21:43:33.242128 kubelet[3211]: I0805 21:43:33.242105 3211 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 5 21:43:33.242761 kubelet[3211]: I0805 21:43:33.242735 3211 server.go:1256] "Started kubelet" Aug 5 21:43:33.247175 kubelet[3211]: I0805 21:43:33.245508 3211 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 21:43:33.253817 kubelet[3211]: I0805 21:43:33.253792 3211 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 21:43:33.255000 kubelet[3211]: I0805 21:43:33.254957 3211 server.go:461] "Adding debug handlers to kubelet server" Aug 5 21:43:33.256274 kubelet[3211]: I0805 21:43:33.256234 3211 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 5 21:43:33.256620 kubelet[3211]: I0805 21:43:33.256606 3211 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 21:43:33.258243 kubelet[3211]: I0805 21:43:33.258197 3211 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 21:43:33.271174 kubelet[3211]: I0805 21:43:33.267491 3211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 21:43:33.271174 kubelet[3211]: I0805 21:43:33.268808 3211 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 21:43:33.271174 kubelet[3211]: I0805 21:43:33.268840 3211 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 21:43:33.271174 kubelet[3211]: I0805 21:43:33.268892 3211 kubelet.go:2329] "Starting kubelet main sync loop" Aug 5 21:43:33.271174 kubelet[3211]: E0805 21:43:33.268947 3211 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 21:43:33.271548 kubelet[3211]: I0805 21:43:33.271531 3211 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 21:43:33.271861 kubelet[3211]: I0805 21:43:33.271849 3211 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 21:43:33.288533 kubelet[3211]: E0805 21:43:33.288508 3211 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 21:43:33.290640 kubelet[3211]: I0805 21:43:33.290617 3211 factory.go:221] Registration of the containerd container factory successfully Aug 5 21:43:33.290813 kubelet[3211]: I0805 21:43:33.290779 3211 factory.go:221] Registration of the systemd container factory successfully Aug 5 21:43:33.291035 kubelet[3211]: I0805 21:43:33.291001 3211 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 5 21:43:33.352141 kubelet[3211]: I0805 21:43:33.352117 3211 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 21:43:33.352417 kubelet[3211]: I0805 21:43:33.352406 3211 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 21:43:33.352518 kubelet[3211]: I0805 21:43:33.352507 3211 state_mem.go:36] "Initialized new in-memory state store" Aug 5 21:43:33.352750 kubelet[3211]: I0805 21:43:33.352739 3211 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 21:43:33.352862 kubelet[3211]: I0805 21:43:33.352853 3211 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 21:43:33.352927 kubelet[3211]: I0805 21:43:33.352913 3211 policy_none.go:49] "None policy: Start" Aug 5 21:43:33.354120 kubelet[3211]: I0805 21:43:33.354101 3211 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 5 21:43:33.354416 kubelet[3211]: I0805 21:43:33.354406 3211 state_mem.go:35] "Initializing new in-memory state store" Aug 5 21:43:33.354745 kubelet[3211]: I0805 21:43:33.354731 3211 state_mem.go:75] "Updated machine memory state" Aug 5 21:43:33.360273 kubelet[3211]: I0805 21:43:33.360254 3211 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 21:43:33.360707 kubelet[3211]: I0805 21:43:33.360694 3211 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 21:43:33.370048 kubelet[3211]: I0805 21:43:33.369863 3211 topology_manager.go:215] "Topology Admit Handler" podUID="87fa20ac2285d6242938337fb751bde2" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.370311 kubelet[3211]: I0805 21:43:33.370266 3211 topology_manager.go:215] "Topology Admit Handler" podUID="1b8225ad3cdb2c0353956e70ad86ca72" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.370491 kubelet[3211]: I0805 21:43:33.370472 3211 topology_manager.go:215] "Topology Admit Handler" podUID="49a740d55d9ed830f755a27aa7013033" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.372343 kubelet[3211]: I0805 21:43:33.372082 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/49a740d55d9ed830f755a27aa7013033-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-a-183bdb833d\" (UID: \"49a740d55d9ed830f755a27aa7013033\") " pod="kube-system/kube-scheduler-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.372934 kubelet[3211]: I0805 21:43:33.372915 3211 kubelet_node_status.go:73] "Attempting to register node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.378495 kubelet[3211]: W0805 21:43:33.378475 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 21:43:33.387132 kubelet[3211]: W0805 
21:43:33.386991 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 21:43:33.387132 kubelet[3211]: E0805 21:43:33.387079 3211 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" already exists" pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.389299 kubelet[3211]: W0805 21:43:33.388218 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 21:43:33.389299 kubelet[3211]: E0805 21:43:33.388267 3211 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" already exists" pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.389299 kubelet[3211]: I0805 21:43:33.388417 3211 kubelet_node_status.go:112] "Node was previously registered" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.389299 kubelet[3211]: I0805 21:43:33.388506 3211 kubelet_node_status.go:76] "Successfully registered node" node="ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473608 kubelet[3211]: I0805 21:43:33.473566 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473833 kubelet[3211]: I0805 21:43:33.473626 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473833 kubelet[3211]: I0805 21:43:33.473715 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/87fa20ac2285d6242938337fb751bde2-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" (UID: \"87fa20ac2285d6242938337fb751bde2\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473833 kubelet[3211]: I0805 21:43:33.473735 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473833 kubelet[3211]: I0805 21:43:33.473754 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473833 kubelet[3211]: I0805 21:43:33.473773 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/87fa20ac2285d6242938337fb751bde2-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" (UID: \"87fa20ac2285d6242938337fb751bde2\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473989 kubelet[3211]: I0805 21:43:33.473794 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/87fa20ac2285d6242938337fb751bde2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" (UID: \"87fa20ac2285d6242938337fb751bde2\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:33.473989 kubelet[3211]: I0805 21:43:33.473814 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b8225ad3cdb2c0353956e70ad86ca72-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-a-183bdb833d\" (UID: \"1b8225ad3cdb2c0353956e70ad86ca72\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:34.241745 kubelet[3211]: I0805 21:43:34.240515 3211 apiserver.go:52] "Watching apiserver" Aug 5 21:43:34.272462 kubelet[3211]: I0805 21:43:34.272387 3211 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 21:43:34.332676 kubelet[3211]: W0805 21:43:34.332568 3211 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 21:43:34.332676 kubelet[3211]: E0805 21:43:34.332659 3211 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.1.0-a-183bdb833d\" already exists" pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" Aug 5 21:43:34.354340 kubelet[3211]: I0805 21:43:34.354284 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.1.0-a-183bdb833d" podStartSLOduration=3.354235956 podStartE2EDuration="3.354235956s" podCreationTimestamp="2024-08-05 21:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:43:34.347076462 +0000 UTC m=+1.192068184" watchObservedRunningTime="2024-08-05 21:43:34.354235956 +0000 UTC m=+1.199227678" Aug 5 21:43:34.363186 kubelet[3211]: I0805 21:43:34.362759 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.1.0-a-183bdb833d" podStartSLOduration=1.362717532 podStartE2EDuration="1.362717532s" podCreationTimestamp="2024-08-05 21:43:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:43:34.354398996 +0000 UTC m=+1.199390718" watchObservedRunningTime="2024-08-05 21:43:34.362717532 +0000 UTC m=+1.207709254" Aug 5 21:43:34.372541 kubelet[3211]: I0805 21:43:34.372446 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.1.0-a-183bdb833d" podStartSLOduration=3.37240631 podStartE2EDuration="3.37240631s" podCreationTimestamp="2024-08-05 21:43:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:43:34.362956092 +0000 UTC m=+1.207947814" watchObservedRunningTime="2024-08-05 21:43:34.37240631 +0000 UTC 
m=+1.217397992" Aug 5 21:43:41.152953 sudo[2199]: pam_unix(sudo:session): session closed for user root Aug 5 21:43:41.226015 sshd[2196]: pam_unix(sshd:session): session closed for user core Aug 5 21:43:41.230698 systemd[1]: sshd@6-10.200.20.35:22-10.200.16.10:58786.service: Deactivated successfully. Aug 5 21:43:41.232383 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 21:43:41.232553 systemd[1]: session-9.scope: Consumed 6.611s CPU time, 132.0M memory peak, 0B memory swap peak. Aug 5 21:43:41.233085 systemd-logind[1666]: Session 9 logged out. Waiting for processes to exit. Aug 5 21:43:41.234688 systemd-logind[1666]: Removed session 9. Aug 5 21:43:46.194871 kubelet[3211]: I0805 21:43:46.194827 3211 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 21:43:46.195737 containerd[1701]: time="2024-08-05T21:43:46.195252605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 21:43:46.196346 kubelet[3211]: I0805 21:43:46.196076 3211 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 21:43:46.949623 kubelet[3211]: I0805 21:43:46.948213 3211 topology_manager.go:215] "Topology Admit Handler" podUID="d2e0a972-29d2-43ac-8106-7f4efc6b44f2" podNamespace="kube-system" podName="kube-proxy-67b5q" Aug 5 21:43:46.957875 kubelet[3211]: I0805 21:43:46.956857 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2h59\" (UniqueName: \"kubernetes.io/projected/d2e0a972-29d2-43ac-8106-7f4efc6b44f2-kube-api-access-b2h59\") pod \"kube-proxy-67b5q\" (UID: \"d2e0a972-29d2-43ac-8106-7f4efc6b44f2\") " pod="kube-system/kube-proxy-67b5q" Aug 5 21:43:46.957875 kubelet[3211]: I0805 21:43:46.956940 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2e0a972-29d2-43ac-8106-7f4efc6b44f2-kube-proxy\") pod \"kube-proxy-67b5q\" (UID: \"d2e0a972-29d2-43ac-8106-7f4efc6b44f2\") " pod="kube-system/kube-proxy-67b5q" Aug 5 21:43:46.957875 kubelet[3211]: I0805 21:43:46.956964 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e0a972-29d2-43ac-8106-7f4efc6b44f2-xtables-lock\") pod \"kube-proxy-67b5q\" (UID: \"d2e0a972-29d2-43ac-8106-7f4efc6b44f2\") " pod="kube-system/kube-proxy-67b5q" Aug 5 21:43:46.957875 kubelet[3211]: I0805 21:43:46.956988 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e0a972-29d2-43ac-8106-7f4efc6b44f2-lib-modules\") pod \"kube-proxy-67b5q\" (UID: \"d2e0a972-29d2-43ac-8106-7f4efc6b44f2\") " pod="kube-system/kube-proxy-67b5q" Aug 5 21:43:46.962948 systemd[1]: Created slice kubepods-besteffort-podd2e0a972_29d2_43ac_8106_7f4efc6b44f2.slice - libcontainer container kubepods-besteffort-podd2e0a972_29d2_43ac_8106_7f4efc6b44f2.slice. Aug 5 21:43:47.136929 kubelet[3211]: I0805 21:43:47.136882 3211 topology_manager.go:215] "Topology Admit Handler" podUID="038b0e55-17ca-4e3c-b1b4-7f8ced560e95" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-2j8bv" Aug 5 21:43:47.150999 systemd[1]: Created slice kubepods-besteffort-pod038b0e55_17ca_4e3c_b1b4_7f8ced560e95.slice - libcontainer container kubepods-besteffort-pod038b0e55_17ca_4e3c_b1b4_7f8ced560e95.slice. 
Aug 5 21:43:47.157913 kubelet[3211]: I0805 21:43:47.157804 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvdwf\" (UniqueName: \"kubernetes.io/projected/038b0e55-17ca-4e3c-b1b4-7f8ced560e95-kube-api-access-pvdwf\") pod \"tigera-operator-76c4974c85-2j8bv\" (UID: \"038b0e55-17ca-4e3c-b1b4-7f8ced560e95\") " pod="tigera-operator/tigera-operator-76c4974c85-2j8bv" Aug 5 21:43:47.157913 kubelet[3211]: I0805 21:43:47.157880 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/038b0e55-17ca-4e3c-b1b4-7f8ced560e95-var-lib-calico\") pod \"tigera-operator-76c4974c85-2j8bv\" (UID: \"038b0e55-17ca-4e3c-b1b4-7f8ced560e95\") " pod="tigera-operator/tigera-operator-76c4974c85-2j8bv" Aug 5 21:43:47.274482 containerd[1701]: time="2024-08-05T21:43:47.273775885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67b5q,Uid:d2e0a972-29d2-43ac-8106-7f4efc6b44f2,Namespace:kube-system,Attempt:0,}" Aug 5 21:43:47.459093 containerd[1701]: time="2024-08-05T21:43:47.459037975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-2j8bv,Uid:038b0e55-17ca-4e3c-b1b4-7f8ced560e95,Namespace:tigera-operator,Attempt:0,}" Aug 5 21:43:47.592811 containerd[1701]: time="2024-08-05T21:43:47.592508613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:47.593404 containerd[1701]: time="2024-08-05T21:43:47.592586293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:47.593404 containerd[1701]: time="2024-08-05T21:43:47.593338894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:47.593404 containerd[1701]: time="2024-08-05T21:43:47.593375294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:47.615585 systemd[1]: Started cri-containerd-fdd819d330e2ff333eb474e2ac8487c490d27348248ffb32b64db126c59e284a.scope - libcontainer container fdd819d330e2ff333eb474e2ac8487c490d27348248ffb32b64db126c59e284a. Aug 5 21:43:47.637992 containerd[1701]: time="2024-08-05T21:43:47.637916214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-67b5q,Uid:d2e0a972-29d2-43ac-8106-7f4efc6b44f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"fdd819d330e2ff333eb474e2ac8487c490d27348248ffb32b64db126c59e284a\"" Aug 5 21:43:47.642059 containerd[1701]: time="2024-08-05T21:43:47.641818541Z" level=info msg="CreateContainer within sandbox \"fdd819d330e2ff333eb474e2ac8487c490d27348248ffb32b64db126c59e284a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 21:43:47.942889 containerd[1701]: time="2024-08-05T21:43:47.942541636Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:47.942889 containerd[1701]: time="2024-08-05T21:43:47.942631676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:47.942889 containerd[1701]: time="2024-08-05T21:43:47.942655916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:47.942889 containerd[1701]: time="2024-08-05T21:43:47.942670156Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:47.968583 systemd[1]: Started cri-containerd-8e6004bf980862cfd861f3007fbf0c34e0cfc5117ca869cef3f843ddf6ecd905.scope - libcontainer container 8e6004bf980862cfd861f3007fbf0c34e0cfc5117ca869cef3f843ddf6ecd905. Aug 5 21:43:48.003318 containerd[1701]: time="2024-08-05T21:43:48.002945264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-2j8bv,Uid:038b0e55-17ca-4e3c-b1b4-7f8ced560e95,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"8e6004bf980862cfd861f3007fbf0c34e0cfc5117ca869cef3f843ddf6ecd905\"" Aug 5 21:43:48.007204 containerd[1701]: time="2024-08-05T21:43:48.006953871Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 21:43:48.224818 containerd[1701]: time="2024-08-05T21:43:48.224452538Z" level=info msg="CreateContainer within sandbox \"fdd819d330e2ff333eb474e2ac8487c490d27348248ffb32b64db126c59e284a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab12ffc962f4675c779cfb7372cd3ff23c4335cbde2845b3604d886f5d342eda\"" Aug 5 21:43:48.226891 containerd[1701]: time="2024-08-05T21:43:48.225368980Z" level=info msg="StartContainer for \"ab12ffc962f4675c779cfb7372cd3ff23c4335cbde2845b3604d886f5d342eda\"" Aug 5 21:43:48.261531 systemd[1]: Started cri-containerd-ab12ffc962f4675c779cfb7372cd3ff23c4335cbde2845b3604d886f5d342eda.scope - libcontainer container ab12ffc962f4675c779cfb7372cd3ff23c4335cbde2845b3604d886f5d342eda. Aug 5 21:43:48.294375 containerd[1701]: time="2024-08-05T21:43:48.294324422Z" level=info msg="StartContainer for \"ab12ffc962f4675c779cfb7372cd3ff23c4335cbde2845b3604d886f5d342eda\" returns successfully" Aug 5 21:43:50.838017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343262682.mount: Deactivated successfully. 
Aug 5 21:43:51.219471 containerd[1701]: time="2024-08-05T21:43:51.219411310Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:51.222185 containerd[1701]: time="2024-08-05T21:43:51.222124995Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473638" Aug 5 21:43:51.227064 containerd[1701]: time="2024-08-05T21:43:51.227002404Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:51.232106 containerd[1701]: time="2024-08-05T21:43:51.232016933Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:51.233287 containerd[1701]: time="2024-08-05T21:43:51.233114655Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 3.226092024s" Aug 5 21:43:51.233287 containerd[1701]: time="2024-08-05T21:43:51.233182295Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Aug 5 21:43:51.237111 containerd[1701]: time="2024-08-05T21:43:51.237027822Z" level=info msg="CreateContainer within sandbox \"8e6004bf980862cfd861f3007fbf0c34e0cfc5117ca869cef3f843ddf6ecd905\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 21:43:51.259283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927417467.mount: Deactivated successfully. Aug 5 21:43:51.267886 containerd[1701]: time="2024-08-05T21:43:51.267832516Z" level=info msg="CreateContainer within sandbox \"8e6004bf980862cfd861f3007fbf0c34e0cfc5117ca869cef3f843ddf6ecd905\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1dca9532de7e2f895e9c64554f71c4d3c59f3360b788d473dc7c99a8bcb9b1c6\"" Aug 5 21:43:51.268299 containerd[1701]: time="2024-08-05T21:43:51.268270437Z" level=info msg="StartContainer for \"1dca9532de7e2f895e9c64554f71c4d3c59f3360b788d473dc7c99a8bcb9b1c6\"" Aug 5 21:43:51.298376 systemd[1]: Started cri-containerd-1dca9532de7e2f895e9c64554f71c4d3c59f3360b788d473dc7c99a8bcb9b1c6.scope - libcontainer container 1dca9532de7e2f895e9c64554f71c4d3c59f3360b788d473dc7c99a8bcb9b1c6. 
Aug 5 21:43:51.324139 containerd[1701]: time="2024-08-05T21:43:51.324086537Z" level=info msg="StartContainer for \"1dca9532de7e2f895e9c64554f71c4d3c59f3360b788d473dc7c99a8bcb9b1c6\" returns successfully" Aug 5 21:43:51.367961 kubelet[3211]: I0805 21:43:51.367798 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-67b5q" podStartSLOduration=5.367743854 podStartE2EDuration="5.367743854s" podCreationTimestamp="2024-08-05 21:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:43:48.363670506 +0000 UTC m=+15.208662188" watchObservedRunningTime="2024-08-05 21:43:51.367743854 +0000 UTC m=+18.212735536" Aug 5 21:43:53.280330 kubelet[3211]: I0805 21:43:53.280011 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-2j8bv" podStartSLOduration=3.05141391 podStartE2EDuration="6.279971419s" podCreationTimestamp="2024-08-05 21:43:47 +0000 UTC" firstStartedPulling="2024-08-05 21:43:48.005089667 +0000 UTC m=+14.850081389" lastFinishedPulling="2024-08-05 21:43:51.233647176 +0000 UTC m=+18.078638898" observedRunningTime="2024-08-05 21:43:51.368951696 +0000 UTC m=+18.213943418" watchObservedRunningTime="2024-08-05 21:43:53.279971419 +0000 UTC m=+20.124963141" Aug 5 21:43:55.574529 kubelet[3211]: I0805 21:43:55.574418 3211 topology_manager.go:215] "Topology Admit Handler" podUID="255eadc9-3b6d-4957-bb38-f9d060edc1cf" podNamespace="calico-system" podName="calico-typha-7df87f7c69-kxj2t" Aug 5 21:43:55.585046 systemd[1]: Created slice kubepods-besteffort-pod255eadc9_3b6d_4957_bb38_f9d060edc1cf.slice - libcontainer container kubepods-besteffort-pod255eadc9_3b6d_4957_bb38_f9d060edc1cf.slice. Aug 5 21:43:55.618215 kubelet[3211]: I0805 21:43:55.617433 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmks\" (UniqueName: \"kubernetes.io/projected/255eadc9-3b6d-4957-bb38-f9d060edc1cf-kube-api-access-5bmks\") pod \"calico-typha-7df87f7c69-kxj2t\" (UID: \"255eadc9-3b6d-4957-bb38-f9d060edc1cf\") " pod="calico-system/calico-typha-7df87f7c69-kxj2t" Aug 5 21:43:55.618215 kubelet[3211]: I0805 21:43:55.617488 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/255eadc9-3b6d-4957-bb38-f9d060edc1cf-tigera-ca-bundle\") pod \"calico-typha-7df87f7c69-kxj2t\" (UID: \"255eadc9-3b6d-4957-bb38-f9d060edc1cf\") " pod="calico-system/calico-typha-7df87f7c69-kxj2t" Aug 5 21:43:55.618215 kubelet[3211]: I0805 21:43:55.617517 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/255eadc9-3b6d-4957-bb38-f9d060edc1cf-typha-certs\") pod \"calico-typha-7df87f7c69-kxj2t\" (UID: \"255eadc9-3b6d-4957-bb38-f9d060edc1cf\") " pod="calico-system/calico-typha-7df87f7c69-kxj2t" Aug 5 21:43:55.681902 kubelet[3211]: I0805 21:43:55.681854 3211 topology_manager.go:215] "Topology Admit Handler" podUID="f3ce555a-f6d5-406d-8094-29f35ef25e4b" podNamespace="calico-system" podName="calico-node-4p684" Aug 5 21:43:55.693792 systemd[1]: Created slice kubepods-besteffort-podf3ce555a_f6d5_406d_8094_29f35ef25e4b.slice - libcontainer container kubepods-besteffort-podf3ce555a_f6d5_406d_8094_29f35ef25e4b.slice. 
Aug 5 21:43:55.718538 kubelet[3211]: I0805 21:43:55.718491 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-var-run-calico\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718538 kubelet[3211]: I0805 21:43:55.718542 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-cni-log-dir\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718755 kubelet[3211]: I0805 21:43:55.718564 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-cni-net-dir\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718755 kubelet[3211]: I0805 21:43:55.718585 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-flexvol-driver-host\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718755 kubelet[3211]: I0805 21:43:55.718604 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-policysync\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718755 kubelet[3211]: I0805 21:43:55.718626 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-lib-modules\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718755 kubelet[3211]: I0805 21:43:55.718656 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-var-lib-calico\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718871 kubelet[3211]: I0805 21:43:55.718677 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvnd2\" (UniqueName: \"kubernetes.io/projected/f3ce555a-f6d5-406d-8094-29f35ef25e4b-kube-api-access-dvnd2\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718871 kubelet[3211]: I0805 21:43:55.718696 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-xtables-lock\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718871 kubelet[3211]: I0805 21:43:55.718714 3211 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3ce555a-f6d5-406d-8094-29f35ef25e4b-node-certs\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718871 kubelet[3211]: I0805 21:43:55.718734 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3ce555a-f6d5-406d-8094-29f35ef25e4b-cni-bin-dir\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.718871 kubelet[3211]: I0805 21:43:55.718765 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3ce555a-f6d5-406d-8094-29f35ef25e4b-tigera-ca-bundle\") pod \"calico-node-4p684\" (UID: \"f3ce555a-f6d5-406d-8094-29f35ef25e4b\") " pod="calico-system/calico-node-4p684" Aug 5 21:43:55.805990 kubelet[3211]: I0805 21:43:55.805943 3211 topology_manager.go:215] "Topology Admit Handler" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" podNamespace="calico-system" podName="csi-node-driver-grpvp" Aug 5 21:43:55.806258 kubelet[3211]: E0805 21:43:55.806234 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:43:55.819802 kubelet[3211]: I0805 21:43:55.819752 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csrhd\" (UniqueName: \"kubernetes.io/projected/a43c2e97-b54c-4d04-a78d-358682744b6a-kube-api-access-csrhd\") pod \"csi-node-driver-grpvp\" (UID: \"a43c2e97-b54c-4d04-a78d-358682744b6a\") " pod="calico-system/csi-node-driver-grpvp" Aug 5 21:43:55.819949 kubelet[3211]: I0805 21:43:55.819838 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a43c2e97-b54c-4d04-a78d-358682744b6a-kubelet-dir\") pod \"csi-node-driver-grpvp\" (UID: \"a43c2e97-b54c-4d04-a78d-358682744b6a\") " pod="calico-system/csi-node-driver-grpvp" Aug 5 21:43:55.819949 kubelet[3211]: I0805 21:43:55.819872 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a43c2e97-b54c-4d04-a78d-358682744b6a-socket-dir\") pod \"csi-node-driver-grpvp\" (UID: \"a43c2e97-b54c-4d04-a78d-358682744b6a\") " pod="calico-system/csi-node-driver-grpvp" Aug 5 21:43:55.819949 kubelet[3211]: I0805 21:43:55.819916 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a43c2e97-b54c-4d04-a78d-358682744b6a-varrun\") pod \"csi-node-driver-grpvp\" (UID: \"a43c2e97-b54c-4d04-a78d-358682744b6a\") " pod="calico-system/csi-node-driver-grpvp" Aug 5 21:43:55.820028 kubelet[3211]: I0805 21:43:55.819972 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a43c2e97-b54c-4d04-a78d-358682744b6a-registration-dir\") pod \"csi-node-driver-grpvp\" (UID: 
\"a43c2e97-b54c-4d04-a78d-358682744b6a\") " pod="calico-system/csi-node-driver-grpvp" Aug 5 21:43:55.822246 kubelet[3211]: E0805 21:43:55.822201 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.822246 kubelet[3211]: W0805 21:43:55.822228 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.822246 kubelet[3211]: E0805 21:43:55.822257 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822391 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.822840 kubelet[3211]: W0805 21:43:55.822399 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822410 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822548 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.822840 kubelet[3211]: W0805 21:43:55.822556 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822574 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822712 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.822840 kubelet[3211]: W0805 21:43:55.822719 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822734 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.822840 kubelet[3211]: E0805 21:43:55.822846 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.823060 kubelet[3211]: W0805 21:43:55.822851 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.823060 kubelet[3211]: E0805 21:43:55.822861 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.823060 kubelet[3211]: E0805 21:43:55.823008 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.823060 kubelet[3211]: W0805 21:43:55.823015 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.823060 kubelet[3211]: E0805 21:43:55.823024 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.824149 kubelet[3211]: E0805 21:43:55.823816 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.824149 kubelet[3211]: W0805 21:43:55.823836 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.824149 kubelet[3211]: E0805 21:43:55.823852 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.824149 kubelet[3211]: E0805 21:43:55.824005 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.824149 kubelet[3211]: W0805 21:43:55.824012 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.824149 kubelet[3211]: E0805 21:43:55.824022 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.824149 kubelet[3211]: E0805 21:43:55.824146 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.824149 kubelet[3211]: W0805 21:43:55.824153 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.824149 kubelet[3211]: E0805 21:43:55.824177 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.824475 kubelet[3211]: E0805 21:43:55.824298 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.824475 kubelet[3211]: W0805 21:43:55.824304 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.824475 kubelet[3211]: E0805 21:43:55.824314 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.824475 kubelet[3211]: E0805 21:43:55.824450 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.824475 kubelet[3211]: W0805 21:43:55.824456 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.824475 kubelet[3211]: E0805 21:43:55.824465 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.832551 kubelet[3211]: E0805 21:43:55.832443 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.832551 kubelet[3211]: W0805 21:43:55.832468 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.832551 kubelet[3211]: E0805 21:43:55.832493 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.851900 kubelet[3211]: E0805 21:43:55.851868 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.851900 kubelet[3211]: W0805 21:43:55.851890 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.851900 kubelet[3211]: E0805 21:43:55.851911 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.894690 containerd[1701]: time="2024-08-05T21:43:55.894632401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df87f7c69-kxj2t,Uid:255eadc9-3b6d-4957-bb38-f9d060edc1cf,Namespace:calico-system,Attempt:0,}" Aug 5 21:43:55.921728 kubelet[3211]: E0805 21:43:55.921689 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.921728 kubelet[3211]: W0805 21:43:55.921712 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.921728 kubelet[3211]: E0805 21:43:55.921734 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.922255 kubelet[3211]: E0805 21:43:55.922232 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.922255 kubelet[3211]: W0805 21:43:55.922250 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.922255 kubelet[3211]: E0805 21:43:55.922271 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.922877 kubelet[3211]: E0805 21:43:55.922843 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.922995 kubelet[3211]: W0805 21:43:55.922862 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.923076 kubelet[3211]: E0805 21:43:55.922998 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.926301 kubelet[3211]: E0805 21:43:55.924592 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.926301 kubelet[3211]: W0805 21:43:55.924611 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.926541 kubelet[3211]: E0805 21:43:55.926269 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.927527 kubelet[3211]: E0805 21:43:55.927499 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.927527 kubelet[3211]: W0805 21:43:55.927521 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.927756 kubelet[3211]: E0805 21:43:55.927622 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.929039 kubelet[3211]: E0805 21:43:55.929004 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.929039 kubelet[3211]: W0805 21:43:55.929027 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.929039 kubelet[3211]: E0805 21:43:55.929133 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.930783 kubelet[3211]: E0805 21:43:55.929567 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.930783 kubelet[3211]: W0805 21:43:55.929609 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.930783 kubelet[3211]: E0805 21:43:55.929805 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.930783 kubelet[3211]: W0805 21:43:55.929814 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.930783 kubelet[3211]: E0805 21:43:55.930047 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.930783 kubelet[3211]: W0805 21:43:55.930058 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.930783 kubelet[3211]: E0805 21:43:55.930271 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.930783 kubelet[3211]: W0805 21:43:55.930281 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.930783 kubelet[3211]: E0805 21:43:55.930454 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.930783 kubelet[3211]: W0805 21:43:55.930462 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.931059 kubelet[3211]: E0805 21:43:55.930480 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.931059 kubelet[3211]: E0805 21:43:55.930785 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.931059 kubelet[3211]: W0805 21:43:55.930797 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.931059 kubelet[3211]: E0805 21:43:55.930836 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.931142 kubelet[3211]: E0805 21:43:55.931082 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.931142 kubelet[3211]: W0805 21:43:55.931092 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.931142 kubelet[3211]: E0805 21:43:55.931103 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.932344 kubelet[3211]: E0805 21:43:55.931180 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.932344 kubelet[3211]: E0805 21:43:55.931383 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.932344 kubelet[3211]: W0805 21:43:55.931392 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.932344 kubelet[3211]: E0805 21:43:55.931439 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.932344 kubelet[3211]: E0805 21:43:55.931827 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.932344 kubelet[3211]: E0805 21:43:55.931859 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.932344 kubelet[3211]: E0805 21:43:55.931973 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.933320 kubelet[3211]: E0805 21:43:55.933018 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.933320 kubelet[3211]: W0805 21:43:55.933032 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.933320 kubelet[3211]: E0805 21:43:55.933058 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.933732 kubelet[3211]: E0805 21:43:55.933593 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.933732 kubelet[3211]: W0805 21:43:55.933612 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.933732 kubelet[3211]: E0805 21:43:55.933641 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.934828 kubelet[3211]: E0805 21:43:55.934567 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.934828 kubelet[3211]: W0805 21:43:55.934588 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.934828 kubelet[3211]: E0805 21:43:55.934763 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.934828 kubelet[3211]: W0805 21:43:55.934771 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.934828 kubelet[3211]: E0805 21:43:55.934771 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.934828 kubelet[3211]: E0805 21:43:55.934821 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.935518 kubelet[3211]: E0805 21:43:55.935480 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.935518 kubelet[3211]: W0805 21:43:55.935500 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.935706 kubelet[3211]: E0805 21:43:55.935693 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.936498 kubelet[3211]: E0805 21:43:55.936356 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.936498 kubelet[3211]: W0805 21:43:55.936375 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.936649 kubelet[3211]: E0805 21:43:55.936635 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.937385 kubelet[3211]: E0805 21:43:55.937057 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.937385 kubelet[3211]: W0805 21:43:55.937073 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.937385 kubelet[3211]: E0805 21:43:55.937188 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.937628 kubelet[3211]: E0805 21:43:55.937540 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.937628 kubelet[3211]: W0805 21:43:55.937555 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.937628 kubelet[3211]: E0805 21:43:55.937595 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.938085 kubelet[3211]: E0805 21:43:55.938000 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.938085 kubelet[3211]: W0805 21:43:55.938016 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.939079 kubelet[3211]: E0805 21:43:55.938044 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.939079 kubelet[3211]: E0805 21:43:55.938520 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.939079 kubelet[3211]: W0805 21:43:55.938534 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.939079 kubelet[3211]: E0805 21:43:55.938551 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.940761 kubelet[3211]: E0805 21:43:55.940714 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.940761 kubelet[3211]: W0805 21:43:55.940739 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.940916 kubelet[3211]: E0805 21:43:55.940775 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:55.955759 containerd[1701]: time="2024-08-05T21:43:55.955483320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:55.956915 containerd[1701]: time="2024-08-05T21:43:55.956738082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:55.956915 containerd[1701]: time="2024-08-05T21:43:55.956822482Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:55.956915 containerd[1701]: time="2024-08-05T21:43:55.956866602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:55.960040 kubelet[3211]: E0805 21:43:55.959062 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:55.960040 kubelet[3211]: W0805 21:43:55.959313 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:55.960040 kubelet[3211]: E0805 21:43:55.959904 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:55.977414 systemd[1]: Started cri-containerd-1ceeb8f32394874eff89e7fee47e4dffd85091e74329c70bfe727f4de3770976.scope - libcontainer container 1ceeb8f32394874eff89e7fee47e4dffd85091e74329c70bfe727f4de3770976. Aug 5 21:43:56.000939 containerd[1701]: time="2024-08-05T21:43:56.000887608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4p684,Uid:f3ce555a-f6d5-406d-8094-29f35ef25e4b,Namespace:calico-system,Attempt:0,}" Aug 5 21:43:56.027805 containerd[1701]: time="2024-08-05T21:43:56.027544460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7df87f7c69-kxj2t,Uid:255eadc9-3b6d-4957-bb38-f9d060edc1cf,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ceeb8f32394874eff89e7fee47e4dffd85091e74329c70bfe727f4de3770976\"" Aug 5 21:43:56.031387 containerd[1701]: time="2024-08-05T21:43:56.031323188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 21:43:56.052704 containerd[1701]: time="2024-08-05T21:43:56.052437389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:43:56.052704 containerd[1701]: time="2024-08-05T21:43:56.052531189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:56.052704 containerd[1701]: time="2024-08-05T21:43:56.052567589Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:43:56.052704 containerd[1701]: time="2024-08-05T21:43:56.052582589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:43:56.073460 systemd[1]: Started cri-containerd-77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461.scope - libcontainer container 77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461. 
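The burst of driver-call.go / plugins.go errors above is the kubelet's FlexVolume prober exec'ing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and unmarshalling its stdout as JSON; the binary is not installed yet, so stdout is empty and decoding "" fails with "unexpected end of JSON input". Below is a minimal sketch of that handshake, assuming the standard exec-based FlexVolume contract — the file name and the capabilities value are illustrative, and the real uds driver is shipped later by Calico's flexvol-driver init container rather than by anything like this stub.

```go
// flexvol_init_stub.go — illustrative stub only; not Calico's real uds driver.
// It answers the "init" call with the kind of JSON the kubelet's FlexVolume
// prober tries to unmarshal above (an empty reply is exactly what produces
// "unexpected end of JSON input").
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the documented FlexVolume exec reply shape.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // assumption: no attach/detach support
		})
		fmt.Println(string(out))
		return
	}
	// mount/unmount and the other driver calls are out of scope for this sketch.
	fmt.Println(`{"status": "Not supported"}`)
}
```

Any driver answering init with valid JSON at that path would silence the probe, which is consistent with the errors in this log stopping once the flexvol-driver container further below has run.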
Aug 5 21:43:56.104985 containerd[1701]: time="2024-08-05T21:43:56.104585291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4p684,Uid:f3ce555a-f6d5-406d-8094-29f35ef25e4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\"" Aug 5 21:43:57.270950 kubelet[3211]: E0805 21:43:57.270886 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:43:58.249340 containerd[1701]: time="2024-08-05T21:43:58.249277355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:58.252938 containerd[1701]: time="2024-08-05T21:43:58.252847282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Aug 5 21:43:58.256316 containerd[1701]: time="2024-08-05T21:43:58.256251529Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:58.261877 containerd[1701]: time="2024-08-05T21:43:58.261715100Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:58.262873 containerd[1701]: time="2024-08-05T21:43:58.262718582Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.231350874s" Aug 5 21:43:58.262873 containerd[1701]: time="2024-08-05T21:43:58.262760262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Aug 5 21:43:58.263762 containerd[1701]: time="2024-08-05T21:43:58.263475383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 21:43:58.277583 containerd[1701]: time="2024-08-05T21:43:58.277532851Z" level=info msg="CreateContainer within sandbox \"1ceeb8f32394874eff89e7fee47e4dffd85091e74329c70bfe727f4de3770976\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 21:43:58.316470 containerd[1701]: time="2024-08-05T21:43:58.316415446Z" level=info msg="CreateContainer within sandbox \"1ceeb8f32394874eff89e7fee47e4dffd85091e74329c70bfe727f4de3770976\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5935c931707720315732cb0eafa5fa610240967131caadf671523817a7efd144\"" Aug 5 21:43:58.318141 containerd[1701]: time="2024-08-05T21:43:58.316980488Z" level=info msg="StartContainer for \"5935c931707720315732cb0eafa5fa610240967131caadf671523817a7efd144\"" Aug 5 21:43:58.349423 systemd[1]: Started cri-containerd-5935c931707720315732cb0eafa5fa610240967131caadf671523817a7efd144.scope - libcontainer container 5935c931707720315732cb0eafa5fa610240967131caadf671523817a7efd144. 
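As a rough cross-check of the "in 2.231350874s" figure containerd reports for the typha pull above, the gap between its PullImage entry (21:43:56.031323188Z) and its Pulled entry (21:43:58.262718582Z) can be recomputed from the log timestamps; containerd times the pull internally, so the two values only approximately agree.

```go
// pull_timing.go — recompute the typha pull duration from the two log timestamps.
package main

import (
	"fmt"
	"time"
)

func main() {
	started, err := time.Parse(time.RFC3339Nano, "2024-08-05T21:43:56.031323188Z") // PullImage entry
	if err != nil {
		panic(err)
	}
	finished, err := time.Parse(time.RFC3339Nano, "2024-08-05T21:43:58.262718582Z") // "Pulled image ... returns image reference" entry
	if err != nil {
		panic(err)
	}
	fmt.Println(finished.Sub(started)) // prints 2.231395394s, close to the reported 2.231350874s
}
```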
Aug 5 21:43:58.392279 containerd[1701]: time="2024-08-05T21:43:58.392101194Z" level=info msg="StartContainer for \"5935c931707720315732cb0eafa5fa610240967131caadf671523817a7efd144\" returns successfully" Aug 5 21:43:59.279541 kubelet[3211]: E0805 21:43:59.279438 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:43:59.396627 kubelet[3211]: I0805 21:43:59.395792 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-7df87f7c69-kxj2t" podStartSLOduration=2.163156315 podStartE2EDuration="4.395624832s" podCreationTimestamp="2024-08-05 21:43:55 +0000 UTC" firstStartedPulling="2024-08-05 21:43:56.030702546 +0000 UTC m=+22.875694268" lastFinishedPulling="2024-08-05 21:43:58.263171063 +0000 UTC m=+25.108162785" observedRunningTime="2024-08-05 21:43:59.395516112 +0000 UTC m=+26.240507834" watchObservedRunningTime="2024-08-05 21:43:59.395624832 +0000 UTC m=+26.240616554" Aug 5 21:43:59.437140 kubelet[3211]: E0805 21:43:59.437099 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.437140 kubelet[3211]: W0805 21:43:59.437124 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.437140 kubelet[3211]: E0805 21:43:59.437146 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.437473 kubelet[3211]: E0805 21:43:59.437322 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.437473 kubelet[3211]: W0805 21:43:59.437329 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.437473 kubelet[3211]: E0805 21:43:59.437340 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.437473 kubelet[3211]: E0805 21:43:59.437461 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.437473 kubelet[3211]: W0805 21:43:59.437468 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.437473 kubelet[3211]: E0805 21:43:59.437478 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.437734 kubelet[3211]: E0805 21:43:59.437602 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.437734 kubelet[3211]: W0805 21:43:59.437609 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.437734 kubelet[3211]: E0805 21:43:59.437618 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.437734 kubelet[3211]: E0805 21:43:59.437746 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.437905 kubelet[3211]: W0805 21:43:59.437753 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.437905 kubelet[3211]: E0805 21:43:59.437763 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.437905 kubelet[3211]: E0805 21:43:59.437876 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.437905 kubelet[3211]: W0805 21:43:59.437882 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.437905 kubelet[3211]: E0805 21:43:59.437891 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.438076 kubelet[3211]: E0805 21:43:59.438004 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438076 kubelet[3211]: W0805 21:43:59.438010 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.438076 kubelet[3211]: E0805 21:43:59.438019 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.438204 kubelet[3211]: E0805 21:43:59.438128 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438204 kubelet[3211]: W0805 21:43:59.438135 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.438204 kubelet[3211]: E0805 21:43:59.438144 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.438358 kubelet[3211]: E0805 21:43:59.438340 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438358 kubelet[3211]: W0805 21:43:59.438355 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.438433 kubelet[3211]: E0805 21:43:59.438366 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.438512 kubelet[3211]: E0805 21:43:59.438498 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438512 kubelet[3211]: W0805 21:43:59.438509 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.438593 kubelet[3211]: E0805 21:43:59.438521 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.438657 kubelet[3211]: E0805 21:43:59.438644 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438657 kubelet[3211]: W0805 21:43:59.438650 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.438707 kubelet[3211]: E0805 21:43:59.438661 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.438788 kubelet[3211]: E0805 21:43:59.438775 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438788 kubelet[3211]: W0805 21:43:59.438785 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.438865 kubelet[3211]: E0805 21:43:59.438795 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.438935 kubelet[3211]: E0805 21:43:59.438922 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.438935 kubelet[3211]: W0805 21:43:59.438932 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.439096 kubelet[3211]: E0805 21:43:59.438942 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.439096 kubelet[3211]: E0805 21:43:59.439061 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.439096 kubelet[3211]: W0805 21:43:59.439068 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.439096 kubelet[3211]: E0805 21:43:59.439077 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.439279 kubelet[3211]: E0805 21:43:59.439213 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.439279 kubelet[3211]: W0805 21:43:59.439220 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.439279 kubelet[3211]: E0805 21:43:59.439231 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.455939 kubelet[3211]: E0805 21:43:59.455848 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.455939 kubelet[3211]: W0805 21:43:59.455873 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.455939 kubelet[3211]: E0805 21:43:59.455894 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.456535 kubelet[3211]: E0805 21:43:59.456367 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.456535 kubelet[3211]: W0805 21:43:59.456382 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.456535 kubelet[3211]: E0805 21:43:59.456406 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.456751 kubelet[3211]: E0805 21:43:59.456694 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.456751 kubelet[3211]: W0805 21:43:59.456705 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.456751 kubelet[3211]: E0805 21:43:59.456729 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.457072 kubelet[3211]: E0805 21:43:59.457053 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.457072 kubelet[3211]: W0805 21:43:59.457071 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.458235 kubelet[3211]: E0805 21:43:59.458203 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.458620 kubelet[3211]: E0805 21:43:59.458606 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.458784 kubelet[3211]: W0805 21:43:59.458686 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.458898 kubelet[3211]: E0805 21:43:59.458839 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.459229 kubelet[3211]: E0805 21:43:59.459132 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.459229 kubelet[3211]: W0805 21:43:59.459143 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.459527 kubelet[3211]: E0805 21:43:59.459405 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.459527 kubelet[3211]: E0805 21:43:59.459470 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.459700 kubelet[3211]: W0805 21:43:59.459615 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.459700 kubelet[3211]: E0805 21:43:59.459645 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.460050 kubelet[3211]: E0805 21:43:59.459979 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.460050 kubelet[3211]: W0805 21:43:59.459992 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.460050 kubelet[3211]: E0805 21:43:59.460011 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.460366 kubelet[3211]: E0805 21:43:59.460354 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.460470 kubelet[3211]: W0805 21:43:59.460416 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.460579 kubelet[3211]: E0805 21:43:59.460508 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.460883 kubelet[3211]: E0805 21:43:59.460759 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.460883 kubelet[3211]: W0805 21:43:59.460772 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.461042 kubelet[3211]: E0805 21:43:59.461007 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.461197 kubelet[3211]: E0805 21:43:59.461120 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.461197 kubelet[3211]: W0805 21:43:59.461130 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.461472 kubelet[3211]: E0805 21:43:59.461282 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.461718 kubelet[3211]: E0805 21:43:59.461706 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.461797 kubelet[3211]: W0805 21:43:59.461785 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.462050 kubelet[3211]: E0805 21:43:59.461847 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.462218 kubelet[3211]: E0805 21:43:59.462205 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.462288 kubelet[3211]: W0805 21:43:59.462277 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.462388 kubelet[3211]: E0805 21:43:59.462364 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.462606 kubelet[3211]: E0805 21:43:59.462593 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.462745 kubelet[3211]: W0805 21:43:59.462664 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.462814 kubelet[3211]: E0805 21:43:59.462805 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.463077 kubelet[3211]: E0805 21:43:59.462967 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.463077 kubelet[3211]: W0805 21:43:59.462978 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.463077 kubelet[3211]: E0805 21:43:59.462996 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.463285 kubelet[3211]: E0805 21:43:59.463274 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.463403 kubelet[3211]: W0805 21:43:59.463333 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.463403 kubelet[3211]: E0805 21:43:59.463349 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.463899 kubelet[3211]: E0805 21:43:59.463622 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.463899 kubelet[3211]: W0805 21:43:59.463633 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.463899 kubelet[3211]: E0805 21:43:59.463647 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 21:43:59.464222 kubelet[3211]: E0805 21:43:59.464207 3211 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 21:43:59.464299 kubelet[3211]: W0805 21:43:59.464289 3211 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 21:43:59.464348 kubelet[3211]: E0805 21:43:59.464341 3211 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 21:43:59.752882 containerd[1701]: time="2024-08-05T21:43:59.752817889Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:59.755650 containerd[1701]: time="2024-08-05T21:43:59.755608255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Aug 5 21:43:59.759722 containerd[1701]: time="2024-08-05T21:43:59.759614303Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:59.764662 containerd[1701]: time="2024-08-05T21:43:59.764608072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:43:59.765542 containerd[1701]: time="2024-08-05T21:43:59.765429474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.501921291s" Aug 5 21:43:59.765542 containerd[1701]: time="2024-08-05T21:43:59.765465874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Aug 5 21:43:59.768758 containerd[1701]: time="2024-08-05T21:43:59.768595360Z" level=info msg="CreateContainer within sandbox \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 21:43:59.808619 containerd[1701]: time="2024-08-05T21:43:59.808568558Z" level=info msg="CreateContainer within sandbox \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6\"" Aug 5 21:43:59.810355 containerd[1701]: time="2024-08-05T21:43:59.809413080Z" level=info msg="StartContainer for \"5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6\"" Aug 5 21:43:59.843476 systemd[1]: Started cri-containerd-5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6.scope - libcontainer container 5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6. Aug 5 21:43:59.889605 containerd[1701]: time="2024-08-05T21:43:59.889546156Z" level=info msg="StartContainer for \"5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6\" returns successfully" Aug 5 21:43:59.901949 systemd[1]: cri-containerd-5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6.scope: Deactivated successfully. Aug 5 21:43:59.928834 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6-rootfs.mount: Deactivated successfully. 
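The container that just started, returned successfully, and had its scope deactivated (5d94a081…, created from the pod2daemon-flexvol image) is the calico-node pod's flexvol-driver init container, whose usual job is to copy the uds FlexVolume driver into the plugin directory the kubelet has been probing. A short, hypothetical node-local check of that outcome — assuming it is run on the node itself — might look like this:

```go
// uds_present.go — hypothetical check that the driver path from the
// driver-call.go errors above exists after the flexvol-driver container ran.
package main

import (
	"fmt"
	"os"
)

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(driver)
	if err != nil {
		fmt.Println("driver still missing:", err) // would match the earlier probe failures
		return
	}
	fmt.Printf("driver present: mode %v, %d bytes\n", info.Mode(), info.Size())
}
```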
Aug 5 21:44:00.390678 kubelet[3211]: I0805 21:44:00.389868 3211 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:44:01.271771 kubelet[3211]: E0805 21:44:01.271521 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:03.269783 kubelet[3211]: E0805 21:44:03.269489 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:04.379442 containerd[1701]: time="2024-08-05T21:44:04.379353325Z" level=info msg="shim disconnected" id=5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6 namespace=k8s.io Aug 5 21:44:04.379442 containerd[1701]: time="2024-08-05T21:44:04.379412765Z" level=warning msg="cleaning up after shim disconnected" id=5d94a08123c53bd32da4f6cce6d98ea610a67c5e21f7b103ce276d63d22599c6 namespace=k8s.io Aug 5 21:44:04.379442 containerd[1701]: time="2024-08-05T21:44:04.379421525Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:44:04.391220 containerd[1701]: time="2024-08-05T21:44:04.390315026Z" level=warning msg="cleanup warnings time=\"2024-08-05T21:44:04Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 5 21:44:04.429730 containerd[1701]: time="2024-08-05T21:44:04.429673943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 21:44:05.270320 kubelet[3211]: E0805 21:44:05.270283 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:07.270021 kubelet[3211]: E0805 21:44:07.269794 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:09.270344 kubelet[3211]: E0805 21:44:09.269520 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:10.716437 kubelet[3211]: I0805 21:44:10.716131 3211 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 21:44:11.270082 kubelet[3211]: E0805 21:44:11.270045 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" 
podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:13.173990 containerd[1701]: time="2024-08-05T21:44:13.173934537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:13.220701 containerd[1701]: time="2024-08-05T21:44:13.220636067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Aug 5 21:44:13.268003 containerd[1701]: time="2024-08-05T21:44:13.267058077Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:13.270899 kubelet[3211]: E0805 21:44:13.270478 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:13.315501 containerd[1701]: time="2024-08-05T21:44:13.315458731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:13.316351 containerd[1701]: time="2024-08-05T21:44:13.316193972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 8.885835027s" Aug 5 21:44:13.316351 containerd[1701]: time="2024-08-05T21:44:13.316233092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Aug 5 21:44:13.318747 containerd[1701]: time="2024-08-05T21:44:13.318691977Z" level=info msg="CreateContainer within sandbox \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 21:44:13.677125 containerd[1701]: time="2024-08-05T21:44:13.677043471Z" level=info msg="CreateContainer within sandbox \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5\"" Aug 5 21:44:13.678836 containerd[1701]: time="2024-08-05T21:44:13.677542952Z" level=info msg="StartContainer for \"47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5\"" Aug 5 21:44:13.714611 systemd[1]: Started cri-containerd-47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5.scope - libcontainer container 47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5. 
Aug 5 21:44:14.120065 containerd[1701]: time="2024-08-05T21:44:14.119892449Z" level=info msg="StartContainer for \"47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5\" returns successfully" Aug 5 21:44:15.270027 kubelet[3211]: E0805 21:44:15.269697 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:17.270254 kubelet[3211]: E0805 21:44:17.269735 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:17.350063 containerd[1701]: time="2024-08-05T21:44:17.350010548Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 21:44:17.352721 systemd[1]: cri-containerd-47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5.scope: Deactivated successfully. Aug 5 21:44:17.372780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5-rootfs.mount: Deactivated successfully. Aug 5 21:44:17.405859 kubelet[3211]: I0805 21:44:17.405823 3211 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Aug 5 21:44:17.666330 kubelet[3211]: I0805 21:44:17.424786 3211 topology_manager.go:215] "Topology Admit Handler" podUID="2e6ea574-1f1c-4e52-898c-5cca482be0c0" podNamespace="kube-system" podName="coredns-76f75df574-gj2wt" Aug 5 21:44:17.666330 kubelet[3211]: I0805 21:44:17.430249 3211 topology_manager.go:215] "Topology Admit Handler" podUID="c9a0c74c-5165-4eb9-a79b-c4b8e106b10a" podNamespace="kube-system" podName="coredns-76f75df574-h95c8" Aug 5 21:44:17.666330 kubelet[3211]: I0805 21:44:17.443505 3211 topology_manager.go:215] "Topology Admit Handler" podUID="055db446-940a-4e53-beae-809dff8b6ada" podNamespace="calico-system" podName="calico-kube-controllers-5f9f5bd8b5-h2rwc" Aug 5 21:44:17.666330 kubelet[3211]: I0805 21:44:17.477043 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/055db446-940a-4e53-beae-809dff8b6ada-tigera-ca-bundle\") pod \"calico-kube-controllers-5f9f5bd8b5-h2rwc\" (UID: \"055db446-940a-4e53-beae-809dff8b6ada\") " pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" Aug 5 21:44:17.666330 kubelet[3211]: I0805 21:44:17.477088 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdkg\" (UniqueName: \"kubernetes.io/projected/055db446-940a-4e53-beae-809dff8b6ada-kube-api-access-9hdkg\") pod \"calico-kube-controllers-5f9f5bd8b5-h2rwc\" (UID: \"055db446-940a-4e53-beae-809dff8b6ada\") " pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" Aug 5 21:44:17.666330 kubelet[3211]: I0805 21:44:17.477116 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c9a0c74c-5165-4eb9-a79b-c4b8e106b10a-config-volume\") pod \"coredns-76f75df574-h95c8\" (UID: \"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a\") " pod="kube-system/coredns-76f75df574-h95c8" Aug 5 21:44:17.442492 systemd[1]: Created slice kubepods-burstable-pod2e6ea574_1f1c_4e52_898c_5cca482be0c0.slice - libcontainer container kubepods-burstable-pod2e6ea574_1f1c_4e52_898c_5cca482be0c0.slice. Aug 5 21:44:17.666681 kubelet[3211]: I0805 21:44:17.477138 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e6ea574-1f1c-4e52-898c-5cca482be0c0-config-volume\") pod \"coredns-76f75df574-gj2wt\" (UID: \"2e6ea574-1f1c-4e52-898c-5cca482be0c0\") " pod="kube-system/coredns-76f75df574-gj2wt" Aug 5 21:44:17.666681 kubelet[3211]: I0805 21:44:17.477186 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k422g\" (UniqueName: \"kubernetes.io/projected/c9a0c74c-5165-4eb9-a79b-c4b8e106b10a-kube-api-access-k422g\") pod \"coredns-76f75df574-h95c8\" (UID: \"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a\") " pod="kube-system/coredns-76f75df574-h95c8" Aug 5 21:44:17.666681 kubelet[3211]: I0805 21:44:17.477212 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwsfb\" (UniqueName: \"kubernetes.io/projected/2e6ea574-1f1c-4e52-898c-5cca482be0c0-kube-api-access-nwsfb\") pod \"coredns-76f75df574-gj2wt\" (UID: \"2e6ea574-1f1c-4e52-898c-5cca482be0c0\") " pod="kube-system/coredns-76f75df574-gj2wt" Aug 5 21:44:17.447648 systemd[1]: Created slice kubepods-burstable-podc9a0c74c_5165_4eb9_a79b_c4b8e106b10a.slice - libcontainer container kubepods-burstable-podc9a0c74c_5165_4eb9_a79b_c4b8e106b10a.slice. Aug 5 21:44:17.460653 systemd[1]: Created slice kubepods-besteffort-pod055db446_940a_4e53_beae_809dff8b6ada.slice - libcontainer container kubepods-besteffort-pod055db446_940a_4e53_beae_809dff8b6ada.slice. 
Aug 5 21:44:17.969984 containerd[1701]: time="2024-08-05T21:44:17.969852149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj2wt,Uid:2e6ea574-1f1c-4e52-898c-5cca482be0c0,Namespace:kube-system,Attempt:0,}" Aug 5 21:44:17.970962 containerd[1701]: time="2024-08-05T21:44:17.970696911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9f5bd8b5-h2rwc,Uid:055db446-940a-4e53-beae-809dff8b6ada,Namespace:calico-system,Attempt:0,}" Aug 5 21:44:17.973974 containerd[1701]: time="2024-08-05T21:44:17.973710756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h95c8,Uid:c9a0c74c-5165-4eb9-a79b-c4b8e106b10a,Namespace:kube-system,Attempt:0,}" Aug 5 21:44:18.245957 containerd[1701]: time="2024-08-05T21:44:18.245532203Z" level=info msg="shim disconnected" id=47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5 namespace=k8s.io Aug 5 21:44:18.245957 containerd[1701]: time="2024-08-05T21:44:18.245594963Z" level=warning msg="cleaning up after shim disconnected" id=47f275106a0cb8989011c14a0ef20eb49407910b818c127ed0e1722b602684e5 namespace=k8s.io Aug 5 21:44:18.245957 containerd[1701]: time="2024-08-05T21:44:18.245625643Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 21:44:18.397833 containerd[1701]: time="2024-08-05T21:44:18.394297891Z" level=error msg="Failed to destroy network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.398615 containerd[1701]: time="2024-08-05T21:44:18.398453299Z" level=error msg="encountered an error cleaning up failed sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.398615 containerd[1701]: time="2024-08-05T21:44:18.398523820Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj2wt,Uid:2e6ea574-1f1c-4e52-898c-5cca482be0c0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.399644 kubelet[3211]: E0805 21:44:18.398758 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.399644 kubelet[3211]: E0805 21:44:18.398823 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-gj2wt" Aug 5 21:44:18.399644 kubelet[3211]: E0805 21:44:18.398845 3211 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-gj2wt" Aug 5 21:44:18.399007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1-shm.mount: Deactivated successfully. Aug 5 21:44:18.402760 kubelet[3211]: E0805 21:44:18.398913 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-gj2wt_kube-system(2e6ea574-1f1c-4e52-898c-5cca482be0c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-gj2wt_kube-system(2e6ea574-1f1c-4e52-898c-5cca482be0c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gj2wt" podUID="2e6ea574-1f1c-4e52-898c-5cca482be0c0" Aug 5 21:44:18.411129 containerd[1701]: time="2024-08-05T21:44:18.411075644Z" level=error msg="Failed to destroy network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.413499 containerd[1701]: time="2024-08-05T21:44:18.413334968Z" level=error msg="encountered an error cleaning up failed sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.413499 containerd[1701]: time="2024-08-05T21:44:18.413411368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9f5bd8b5-h2rwc,Uid:055db446-940a-4e53-beae-809dff8b6ada,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.413673 kubelet[3211]: E0805 21:44:18.413632 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.414056 kubelet[3211]: E0805 21:44:18.414032 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" Aug 5 21:44:18.414134 kubelet[3211]: E0805 21:44:18.414068 3211 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" Aug 5 21:44:18.414184 kubelet[3211]: E0805 21:44:18.414133 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f9f5bd8b5-h2rwc_calico-system(055db446-940a-4e53-beae-809dff8b6ada)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f9f5bd8b5-h2rwc_calico-system(055db446-940a-4e53-beae-809dff8b6ada)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" podUID="055db446-940a-4e53-beae-809dff8b6ada" Aug 5 21:44:18.415147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2-shm.mount: Deactivated successfully. 
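Every RunPodSandbox and StopPodSandbox failure above has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, which exists only once the calico/node container is running with the host's /var/lib/calico mounted, exactly as the error text suggests. A trivial sketch of that precondition, with the path taken from the log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // calico/node writes this file at startup; the CNI plugin reads it to learn
        // the node name, so pod networking cannot be set up or torn down before then.
        const nodename = "/var/lib/calico/nodename"
        if _, err := os.Stat(nodename); err != nil {
            fmt.Println("calico not ready:", err)
            return
        }
        b, err := os.ReadFile(nodename)
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        fmt.Println("calico node name:", string(b))
    }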
Aug 5 21:44:18.420257 containerd[1701]: time="2024-08-05T21:44:18.420212542Z" level=error msg="Failed to destroy network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.420836 containerd[1701]: time="2024-08-05T21:44:18.420701302Z" level=error msg="encountered an error cleaning up failed sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.420836 containerd[1701]: time="2024-08-05T21:44:18.420752903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h95c8,Uid:c9a0c74c-5165-4eb9-a79b-c4b8e106b10a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.421108 kubelet[3211]: E0805 21:44:18.421086 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.421563 kubelet[3211]: E0805 21:44:18.421266 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-h95c8" Aug 5 21:44:18.421563 kubelet[3211]: E0805 21:44:18.421292 3211 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-h95c8" Aug 5 21:44:18.421563 kubelet[3211]: E0805 21:44:18.421346 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-h95c8_kube-system(c9a0c74c-5165-4eb9-a79b-c4b8e106b10a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-h95c8_kube-system(c9a0c74c-5165-4eb9-a79b-c4b8e106b10a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-h95c8" podUID="c9a0c74c-5165-4eb9-a79b-c4b8e106b10a" Aug 5 21:44:18.423608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505-shm.mount: Deactivated successfully. Aug 5 21:44:18.458722 kubelet[3211]: I0805 21:44:18.458458 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:18.459526 containerd[1701]: time="2024-08-05T21:44:18.459470098Z" level=info msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" Aug 5 21:44:18.460931 kubelet[3211]: I0805 21:44:18.460316 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:18.461026 containerd[1701]: time="2024-08-05T21:44:18.460721020Z" level=info msg="Ensure that sandbox 4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1 in task-service has been cleanup successfully" Aug 5 21:44:18.461026 containerd[1701]: time="2024-08-05T21:44:18.460778060Z" level=info msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" Aug 5 21:44:18.461213 containerd[1701]: time="2024-08-05T21:44:18.461107141Z" level=info msg="Ensure that sandbox a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505 in task-service has been cleanup successfully" Aug 5 21:44:18.468829 containerd[1701]: time="2024-08-05T21:44:18.468778636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 21:44:18.471085 kubelet[3211]: I0805 21:44:18.470430 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:18.471552 containerd[1701]: time="2024-08-05T21:44:18.471515881Z" level=info msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" Aug 5 21:44:18.471769 containerd[1701]: time="2024-08-05T21:44:18.471743401Z" level=info msg="Ensure that sandbox 34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2 in task-service has been cleanup successfully" Aug 5 21:44:18.521799 containerd[1701]: time="2024-08-05T21:44:18.521684778Z" level=error msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" failed" error="failed to destroy network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.522586 kubelet[3211]: E0805 21:44:18.522399 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:18.522586 kubelet[3211]: E0805 21:44:18.522492 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505"} Aug 5 
21:44:18.522586 kubelet[3211]: E0805 21:44:18.522540 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:18.522813 kubelet[3211]: E0805 21:44:18.522574 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-h95c8" podUID="c9a0c74c-5165-4eb9-a79b-c4b8e106b10a" Aug 5 21:44:18.526852 containerd[1701]: time="2024-08-05T21:44:18.526626388Z" level=error msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" failed" error="failed to destroy network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.527297 kubelet[3211]: E0805 21:44:18.527023 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:18.527297 kubelet[3211]: E0805 21:44:18.527064 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1"} Aug 5 21:44:18.527297 kubelet[3211]: E0805 21:44:18.527101 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e6ea574-1f1c-4e52-898c-5cca482be0c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:18.527297 kubelet[3211]: E0805 21:44:18.527141 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e6ea574-1f1c-4e52-898c-5cca482be0c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gj2wt" podUID="2e6ea574-1f1c-4e52-898c-5cca482be0c0" Aug 5 21:44:18.528568 containerd[1701]: time="2024-08-05T21:44:18.528527351Z" level=error msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" failed" error="failed to destroy network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:18.528806 kubelet[3211]: E0805 21:44:18.528786 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:18.529000 kubelet[3211]: E0805 21:44:18.528894 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2"} Aug 5 21:44:18.529000 kubelet[3211]: E0805 21:44:18.528945 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"055db446-940a-4e53-beae-809dff8b6ada\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:18.529000 kubelet[3211]: E0805 21:44:18.528977 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"055db446-940a-4e53-beae-809dff8b6ada\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" podUID="055db446-940a-4e53-beae-809dff8b6ada" Aug 5 21:44:19.275826 systemd[1]: Created slice kubepods-besteffort-poda43c2e97_b54c_4d04_a78d_358682744b6a.slice - libcontainer container kubepods-besteffort-poda43c2e97_b54c_4d04_a78d_358682744b6a.slice. 
Aug 5 21:44:19.278965 containerd[1701]: time="2024-08-05T21:44:19.278915909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grpvp,Uid:a43c2e97-b54c-4d04-a78d-358682744b6a,Namespace:calico-system,Attempt:0,}" Aug 5 21:44:20.014151 containerd[1701]: time="2024-08-05T21:44:20.012196151Z" level=error msg="Failed to destroy network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:20.014194 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a-shm.mount: Deactivated successfully. Aug 5 21:44:20.015645 containerd[1701]: time="2024-08-05T21:44:20.015001557Z" level=error msg="encountered an error cleaning up failed sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:20.015645 containerd[1701]: time="2024-08-05T21:44:20.015070597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grpvp,Uid:a43c2e97-b54c-4d04-a78d-358682744b6a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:20.015855 kubelet[3211]: E0805 21:44:20.015297 3211 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:20.015855 kubelet[3211]: E0805 21:44:20.015344 3211 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-grpvp" Aug 5 21:44:20.015855 kubelet[3211]: E0805 21:44:20.015370 3211 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-grpvp" Aug 5 21:44:20.016222 kubelet[3211]: E0805 21:44:20.015419 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-grpvp_calico-system(a43c2e97-b54c-4d04-a78d-358682744b6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-grpvp_calico-system(a43c2e97-b54c-4d04-a78d-358682744b6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:20.474856 kubelet[3211]: I0805 21:44:20.474826 3211 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:20.475707 containerd[1701]: time="2024-08-05T21:44:20.475668413Z" level=info msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" Aug 5 21:44:20.475914 containerd[1701]: time="2024-08-05T21:44:20.475882093Z" level=info msg="Ensure that sandbox 776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a in task-service has been cleanup successfully" Aug 5 21:44:20.499930 containerd[1701]: time="2024-08-05T21:44:20.499876898Z" level=error msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" failed" error="failed to destroy network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:20.500168 kubelet[3211]: E0805 21:44:20.500133 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:20.500242 kubelet[3211]: E0805 21:44:20.500197 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a"} Aug 5 21:44:20.500242 kubelet[3211]: E0805 21:44:20.500233 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a43c2e97-b54c-4d04-a78d-358682744b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:20.500332 kubelet[3211]: E0805 21:44:20.500260 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a43c2e97-b54c-4d04-a78d-358682744b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-grpvp" 
podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:30.751190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922881754.mount: Deactivated successfully. Aug 5 21:44:32.270399 containerd[1701]: time="2024-08-05T21:44:32.270129641Z" level=info msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" Aug 5 21:44:32.271394 containerd[1701]: time="2024-08-05T21:44:32.270129681Z" level=info msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" Aug 5 21:44:32.301004 containerd[1701]: time="2024-08-05T21:44:32.300906132Z" level=error msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" failed" error="failed to destroy network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:32.301786 kubelet[3211]: E0805 21:44:32.301640 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:32.301786 kubelet[3211]: E0805 21:44:32.301702 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a"} Aug 5 21:44:32.301786 kubelet[3211]: E0805 21:44:32.301742 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a43c2e97-b54c-4d04-a78d-358682744b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:32.301786 kubelet[3211]: E0805 21:44:32.301783 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a43c2e97-b54c-4d04-a78d-358682744b6a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-grpvp" podUID="a43c2e97-b54c-4d04-a78d-358682744b6a" Aug 5 21:44:32.302307 containerd[1701]: time="2024-08-05T21:44:32.301719174Z" level=error msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" failed" error="failed to destroy network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:32.302347 kubelet[3211]: E0805 21:44:32.301919 3211 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:32.302347 kubelet[3211]: E0805 21:44:32.301962 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1"} Aug 5 21:44:32.302347 kubelet[3211]: E0805 21:44:32.301997 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e6ea574-1f1c-4e52-898c-5cca482be0c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:32.302347 kubelet[3211]: E0805 21:44:32.302043 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e6ea574-1f1c-4e52-898c-5cca482be0c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-gj2wt" podUID="2e6ea574-1f1c-4e52-898c-5cca482be0c0" Aug 5 21:44:33.289253 containerd[1701]: time="2024-08-05T21:44:33.287944967Z" level=info msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" Aug 5 21:44:33.780445 containerd[1701]: time="2024-08-05T21:44:33.291439373Z" level=info msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" Aug 5 21:44:33.782759 containerd[1701]: time="2024-08-05T21:44:33.782623899Z" level=error msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" failed" error="failed to destroy network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:33.782885 kubelet[3211]: E0805 21:44:33.782867 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:33.783153 kubelet[3211]: E0805 21:44:33.782906 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2"} Aug 5 21:44:33.783153 kubelet[3211]: 
E0805 21:44:33.782943 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"055db446-940a-4e53-beae-809dff8b6ada\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:33.783153 kubelet[3211]: E0805 21:44:33.782970 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"055db446-940a-4e53-beae-809dff8b6ada\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" podUID="055db446-940a-4e53-beae-809dff8b6ada" Aug 5 21:44:33.783578 containerd[1701]: time="2024-08-05T21:44:33.783510581Z" level=error msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" failed" error="failed to destroy network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 21:44:33.783701 kubelet[3211]: E0805 21:44:33.783681 3211 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:33.783733 kubelet[3211]: E0805 21:44:33.783714 3211 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505"} Aug 5 21:44:33.783756 kubelet[3211]: E0805 21:44:33.783743 3211 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 21:44:33.783860 kubelet[3211]: E0805 21:44:33.783767 3211 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-76f75df574-h95c8" podUID="c9a0c74c-5165-4eb9-a79b-c4b8e106b10a" Aug 5 21:44:34.118267 containerd[1701]: time="2024-08-05T21:44:34.117826826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:34.120128 containerd[1701]: time="2024-08-05T21:44:34.120083950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Aug 5 21:44:34.181986 containerd[1701]: time="2024-08-05T21:44:34.181946869Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:34.228225 containerd[1701]: time="2024-08-05T21:44:34.228145478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:34.229332 containerd[1701]: time="2024-08-05T21:44:34.228799560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 15.759785964s" Aug 5 21:44:34.229332 containerd[1701]: time="2024-08-05T21:44:34.228835920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Aug 5 21:44:34.242130 containerd[1701]: time="2024-08-05T21:44:34.241739305Z" level=info msg="CreateContainer within sandbox \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 21:44:34.575638 containerd[1701]: time="2024-08-05T21:44:34.575588269Z" level=info msg="CreateContainer within sandbox \"77bedd04158088dda846041b8716ab378e5d17b609281d37f813283e5fe2d461\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"18a843451119ad0c4231b3c6f1016290cdcfcb07d40076710f66f3207f35ec58\"" Aug 5 21:44:34.576321 containerd[1701]: time="2024-08-05T21:44:34.576285030Z" level=info msg="StartContainer for \"18a843451119ad0c4231b3c6f1016290cdcfcb07d40076710f66f3207f35ec58\"" Aug 5 21:44:34.606373 systemd[1]: Started cri-containerd-18a843451119ad0c4231b3c6f1016290cdcfcb07d40076710f66f3207f35ec58.scope - libcontainer container 18a843451119ad0c4231b3c6f1016290cdcfcb07d40076710f66f3207f35ec58. Aug 5 21:44:34.635370 containerd[1701]: time="2024-08-05T21:44:34.635311984Z" level=info msg="StartContainer for \"18a843451119ad0c4231b3c6f1016290cdcfcb07d40076710f66f3207f35ec58\" returns successfully" Aug 5 21:44:35.181949 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 21:44:35.182086 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 5 21:44:35.525361 kubelet[3211]: I0805 21:44:35.524700 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4p684" podStartSLOduration=2.402403385 podStartE2EDuration="40.52465749s" podCreationTimestamp="2024-08-05 21:43:55 +0000 UTC" firstStartedPulling="2024-08-05 21:43:56.106771375 +0000 UTC m=+22.951763057" lastFinishedPulling="2024-08-05 21:44:34.22902544 +0000 UTC m=+61.074017162" observedRunningTime="2024-08-05 21:44:35.52465605 +0000 UTC m=+62.369647772" watchObservedRunningTime="2024-08-05 21:44:35.52465749 +0000 UTC m=+62.369649212" Aug 5 21:44:37.860368 systemd-networkd[1451]: vxlan.calico: Link UP Aug 5 21:44:37.860379 systemd-networkd[1451]: vxlan.calico: Gained carrier Aug 5 21:44:39.788414 systemd-networkd[1451]: vxlan.calico: Gained IPv6LL Aug 5 21:44:43.272597 containerd[1701]: time="2024-08-05T21:44:43.272467648Z" level=info msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.329 [INFO][4536] k8s.go 608: Cleaning up netns ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.329 [INFO][4536] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" iface="eth0" netns="/var/run/netns/cni-4c5de742-00dc-1e89-0915-8092413ec75a" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.331 [INFO][4536] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" iface="eth0" netns="/var/run/netns/cni-4c5de742-00dc-1e89-0915-8092413ec75a" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.331 [INFO][4536] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" iface="eth0" netns="/var/run/netns/cni-4c5de742-00dc-1e89-0915-8092413ec75a" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.331 [INFO][4536] k8s.go 615: Releasing IP address(es) ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.332 [INFO][4536] utils.go 188: Calico CNI releasing IP address ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.358 [INFO][4542] ipam_plugin.go 411: Releasing address using handleID ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.358 [INFO][4542] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.358 [INFO][4542] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.367 [WARNING][4542] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.367 [INFO][4542] ipam_plugin.go 439: Releasing address using workloadID ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.369 [INFO][4542] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:44:43.372626 containerd[1701]: 2024-08-05 21:44:43.370 [INFO][4536] k8s.go 621: Teardown processing complete. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:44:43.372626 containerd[1701]: time="2024-08-05T21:44:43.373376408Z" level=info msg="TearDown network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" successfully" Aug 5 21:44:43.372626 containerd[1701]: time="2024-08-05T21:44:43.373435928Z" level=info msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" returns successfully" Aug 5 21:44:43.375575 systemd[1]: run-netns-cni\x2d4c5de742\x2d00dc\x2d1e89\x2d0915\x2d8092413ec75a.mount: Deactivated successfully. Aug 5 21:44:43.378042 containerd[1701]: time="2024-08-05T21:44:43.377619814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grpvp,Uid:a43c2e97-b54c-4d04-a78d-358682744b6a,Namespace:calico-system,Attempt:1,}" Aug 5 21:44:43.604622 systemd-networkd[1451]: cali980cd03338d: Link UP Aug 5 21:44:43.605591 systemd-networkd[1451]: cali980cd03338d: Gained carrier Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.461 [INFO][4552] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0 csi-node-driver- calico-system a43c2e97-b54c-4d04-a78d-358682744b6a 774 0 2024-08-05 21:43:55 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.1.0-a-183bdb833d csi-node-driver-grpvp eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali980cd03338d [] []}} ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.461 [INFO][4552] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.537 [INFO][4559] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" HandleID="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" 
Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.556 [INFO][4559] ipam_plugin.go 264: Auto assigning IP ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" HandleID="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004fb6f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.1.0-a-183bdb833d", "pod":"csi-node-driver-grpvp", "timestamp":"2024-08-05 21:44:43.537706788 +0000 UTC"}, Hostname:"ci-4012.1.0-a-183bdb833d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.556 [INFO][4559] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.556 [INFO][4559] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.558 [INFO][4559] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-183bdb833d' Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.567 [INFO][4559] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.572 [INFO][4559] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.577 [INFO][4559] ipam.go 489: Trying affinity for 192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.580 [INFO][4559] ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.582 [INFO][4559] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.582 [INFO][4559] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.584 [INFO][4559] ipam.go 1685: Creating new handle: k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766 Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.588 [INFO][4559] ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.596 [INFO][4559] ipam.go 1216: Successfully claimed IPs: [192.168.54.65/26] block=192.168.54.64/26 handle="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.596 [INFO][4559] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.65/26] handle="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" 
host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.596 [INFO][4559] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:44:43.628404 containerd[1701]: 2024-08-05 21:44:43.596 [INFO][4559] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.54.65/26] IPv6=[] ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" HandleID="k8s-pod-network.00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.631824 containerd[1701]: 2024-08-05 21:44:43.600 [INFO][4552] k8s.go 386: Populated endpoint ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a43c2e97-b54c-4d04-a78d-358682744b6a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"", Pod:"csi-node-driver-grpvp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali980cd03338d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:43.631824 containerd[1701]: 2024-08-05 21:44:43.600 [INFO][4552] k8s.go 387: Calico CNI using IPs: [192.168.54.65/32] ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.631824 containerd[1701]: 2024-08-05 21:44:43.600 [INFO][4552] dataplane_linux.go 68: Setting the host side veth name to cali980cd03338d ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.631824 containerd[1701]: 2024-08-05 21:44:43.603 [INFO][4552] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.631824 containerd[1701]: 2024-08-05 21:44:43.604 [INFO][4552] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a43c2e97-b54c-4d04-a78d-358682744b6a", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766", Pod:"csi-node-driver-grpvp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali980cd03338d", MAC:"c6:bf:8a:88:6f:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:43.631824 containerd[1701]: 2024-08-05 21:44:43.625 [INFO][4552] k8s.go 500: Wrote updated endpoint to datastore ContainerID="00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766" Namespace="calico-system" Pod="csi-node-driver-grpvp" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:44:43.676654 containerd[1701]: time="2024-08-05T21:44:43.676459968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:44:43.676654 containerd[1701]: time="2024-08-05T21:44:43.676553608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:43.676654 containerd[1701]: time="2024-08-05T21:44:43.676572208Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:44:43.676944 containerd[1701]: time="2024-08-05T21:44:43.676600088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:43.703401 systemd[1]: Started cri-containerd-00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766.scope - libcontainer container 00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766. 
Aug 5 21:44:43.736237 containerd[1701]: time="2024-08-05T21:44:43.736189783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-grpvp,Uid:a43c2e97-b54c-4d04-a78d-358682744b6a,Namespace:calico-system,Attempt:1,} returns sandbox id \"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766\"" Aug 5 21:44:43.739267 containerd[1701]: time="2024-08-05T21:44:43.738728067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 21:44:45.157277 containerd[1701]: time="2024-08-05T21:44:45.157228196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:45.159242 containerd[1701]: time="2024-08-05T21:44:45.159197159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Aug 5 21:44:45.162871 containerd[1701]: time="2024-08-05T21:44:45.162811645Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:45.167483 containerd[1701]: time="2024-08-05T21:44:45.167414092Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:45.168262 containerd[1701]: time="2024-08-05T21:44:45.168133053Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.429356386s" Aug 5 21:44:45.168262 containerd[1701]: time="2024-08-05T21:44:45.168192413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Aug 5 21:44:45.172515 containerd[1701]: time="2024-08-05T21:44:45.172345980Z" level=info msg="CreateContainer within sandbox \"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 21:44:45.218951 containerd[1701]: time="2024-08-05T21:44:45.218900134Z" level=info msg="CreateContainer within sandbox \"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1a221acd83ef9cbf370226e9a9ccec303536659d73bcedfd1a03fcebfb565b91\"" Aug 5 21:44:45.219811 containerd[1701]: time="2024-08-05T21:44:45.219776215Z" level=info msg="StartContainer for \"1a221acd83ef9cbf370226e9a9ccec303536659d73bcedfd1a03fcebfb565b91\"" Aug 5 21:44:45.253460 systemd[1]: Started cri-containerd-1a221acd83ef9cbf370226e9a9ccec303536659d73bcedfd1a03fcebfb565b91.scope - libcontainer container 1a221acd83ef9cbf370226e9a9ccec303536659d73bcedfd1a03fcebfb565b91. 
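The csi image pull is bracketed by the PullImage entry at 21:44:43.738 and the "Pulled image … in 1.429356386s" entry at 21:44:45.168. A small sketch, using the two RFC 3339 timestamps copied from those entries (example values only, nothing containerd exposes here programmatically), that computes the gap between them; the result lands close to, but not exactly on, the duration containerd itself reports, since the request is logged slightly before the transfer begins:

```go
// pull_duration.go - sketch: wall-clock gap between the two containerd log
// timestamps above (values copied from the entries, purely illustrative).
package main

import (
	"fmt"
	"time"
)

func main() {
	start, err := time.Parse(time.RFC3339Nano, "2024-08-05T21:44:43.738728067Z") // PullImage request logged
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2024-08-05T21:44:45.168192413Z") // "Pulled image ..." logged
	if err != nil {
		panic(err)
	}
	fmt.Println("elapsed:", end.Sub(start)) // ~1.43s, consistent with the reported 1.429356386s
}
```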
Aug 5 21:44:45.271028 containerd[1701]: time="2024-08-05T21:44:45.270823016Z" level=info msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" Aug 5 21:44:45.294141 containerd[1701]: time="2024-08-05T21:44:45.293357772Z" level=info msg="StartContainer for \"1a221acd83ef9cbf370226e9a9ccec303536659d73bcedfd1a03fcebfb565b91\" returns successfully" Aug 5 21:44:45.296382 containerd[1701]: time="2024-08-05T21:44:45.296136536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 21:44:45.356502 systemd-networkd[1451]: cali980cd03338d: Gained IPv6LL Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.336 [INFO][4667] k8s.go 608: Cleaning up netns ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.336 [INFO][4667] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" iface="eth0" netns="/var/run/netns/cni-ba57e386-0820-eb85-2df3-91b4a339f824" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.337 [INFO][4667] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" iface="eth0" netns="/var/run/netns/cni-ba57e386-0820-eb85-2df3-91b4a339f824" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.338 [INFO][4667] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" iface="eth0" netns="/var/run/netns/cni-ba57e386-0820-eb85-2df3-91b4a339f824" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.338 [INFO][4667] k8s.go 615: Releasing IP address(es) ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.338 [INFO][4667] utils.go 188: Calico CNI releasing IP address ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.359 [INFO][4676] ipam_plugin.go 411: Releasing address using handleID ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.359 [INFO][4676] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.359 [INFO][4676] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.367 [WARNING][4676] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.367 [INFO][4676] ipam_plugin.go 439: Releasing address using workloadID ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.369 [INFO][4676] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:44:45.371992 containerd[1701]: 2024-08-05 21:44:45.370 [INFO][4667] k8s.go 621: Teardown processing complete. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:44:45.372653 containerd[1701]: time="2024-08-05T21:44:45.372243377Z" level=info msg="TearDown network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" successfully" Aug 5 21:44:45.372653 containerd[1701]: time="2024-08-05T21:44:45.372271297Z" level=info msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" returns successfully" Aug 5 21:44:45.373821 containerd[1701]: time="2024-08-05T21:44:45.373446899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9f5bd8b5-h2rwc,Uid:055db446-940a-4e53-beae-809dff8b6ada,Namespace:calico-system,Attempt:1,}" Aug 5 21:44:45.375065 systemd[1]: run-netns-cni\x2dba57e386\x2d0820\x2deb85\x2d2df3\x2d91b4a339f824.mount: Deactivated successfully. 
Aug 5 21:44:45.522696 systemd-networkd[1451]: calib559fb93c57: Link UP Aug 5 21:44:45.523722 systemd-networkd[1451]: calib559fb93c57: Gained carrier Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.452 [INFO][4686] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0 calico-kube-controllers-5f9f5bd8b5- calico-system 055db446-940a-4e53-beae-809dff8b6ada 788 0 2024-08-05 21:43:55 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f9f5bd8b5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.1.0-a-183bdb833d calico-kube-controllers-5f9f5bd8b5-h2rwc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib559fb93c57 [] []}} ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.452 [INFO][4686] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.484 [INFO][4693] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" HandleID="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.495 [INFO][4693] ipam_plugin.go 264: Auto assigning IP ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" HandleID="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000349360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.1.0-a-183bdb833d", "pod":"calico-kube-controllers-5f9f5bd8b5-h2rwc", "timestamp":"2024-08-05 21:44:45.484109514 +0000 UTC"}, Hostname:"ci-4012.1.0-a-183bdb833d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.495 [INFO][4693] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.495 [INFO][4693] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.495 [INFO][4693] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-183bdb833d' Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.496 [INFO][4693] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.500 [INFO][4693] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.504 [INFO][4693] ipam.go 489: Trying affinity for 192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.506 [INFO][4693] ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.508 [INFO][4693] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.508 [INFO][4693] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.509 [INFO][4693] ipam.go 1685: Creating new handle: k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39 Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.512 [INFO][4693] ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.517 [INFO][4693] ipam.go 1216: Successfully claimed IPs: [192.168.54.66/26] block=192.168.54.64/26 handle="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.517 [INFO][4693] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.66/26] handle="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.517 [INFO][4693] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:44:45.541665 containerd[1701]: 2024-08-05 21:44:45.517 [INFO][4693] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.54.66/26] IPv6=[] ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" HandleID="k8s-pod-network.c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.542959 containerd[1701]: 2024-08-05 21:44:45.519 [INFO][4686] k8s.go 386: Populated endpoint ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0", GenerateName:"calico-kube-controllers-5f9f5bd8b5-", Namespace:"calico-system", SelfLink:"", UID:"055db446-940a-4e53-beae-809dff8b6ada", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9f5bd8b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"", Pod:"calico-kube-controllers-5f9f5bd8b5-h2rwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib559fb93c57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:45.542959 containerd[1701]: 2024-08-05 21:44:45.520 [INFO][4686] k8s.go 387: Calico CNI using IPs: [192.168.54.66/32] ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.542959 containerd[1701]: 2024-08-05 21:44:45.520 [INFO][4686] dataplane_linux.go 68: Setting the host side veth name to calib559fb93c57 ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.542959 containerd[1701]: 2024-08-05 21:44:45.524 [INFO][4686] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.542959 containerd[1701]: 2024-08-05 21:44:45.524 [INFO][4686] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0", GenerateName:"calico-kube-controllers-5f9f5bd8b5-", Namespace:"calico-system", SelfLink:"", UID:"055db446-940a-4e53-beae-809dff8b6ada", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9f5bd8b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39", Pod:"calico-kube-controllers-5f9f5bd8b5-h2rwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib559fb93c57", MAC:"72:29:6b:58:a6:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:45.542959 containerd[1701]: 2024-08-05 21:44:45.539 [INFO][4686] k8s.go 500: Wrote updated endpoint to datastore ContainerID="c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39" Namespace="calico-system" Pod="calico-kube-controllers-5f9f5bd8b5-h2rwc" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:44:45.569620 containerd[1701]: time="2024-08-05T21:44:45.569507890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:44:45.569620 containerd[1701]: time="2024-08-05T21:44:45.569567010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:45.569620 containerd[1701]: time="2024-08-05T21:44:45.569595490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:44:45.570355 containerd[1701]: time="2024-08-05T21:44:45.569846010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:45.602545 systemd[1]: Started cri-containerd-c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39.scope - libcontainer container c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39. 
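Every endpoint in this stretch is carved out of the same affine block, 192.168.54.64/26, which is why the assignments come out sequentially: .65 for csi-node-driver-grpvp above, .66 here for calico-kube-controllers, and .67/.68 for the two coredns pods below. A one-liner sketch confirming how many addresses such a /26 block holds:

```go
// block_size.go - sketch: size of a /26 IPAM block such as 192.168.54.64/26
// (64 addresses, i.e. .64 through .127).
package main

import (
	"fmt"
	"net"
)

func main() {
	_, block, err := net.ParseCIDR("192.168.54.64/26")
	if err != nil {
		panic(err)
	}
	ones, bits := block.Mask.Size()
	fmt.Printf("%s holds %d addresses\n", block, 1<<(bits-ones)) // 192.168.54.64/26 holds 64 addresses
}
```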
Aug 5 21:44:45.636080 containerd[1701]: time="2024-08-05T21:44:45.636026315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f9f5bd8b5-h2rwc,Uid:055db446-940a-4e53-beae-809dff8b6ada,Namespace:calico-system,Attempt:1,} returns sandbox id \"c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39\"" Aug 5 21:44:46.270879 containerd[1701]: time="2024-08-05T21:44:46.270813362Z" level=info msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" Aug 5 21:44:46.272352 containerd[1701]: time="2024-08-05T21:44:46.271882563Z" level=info msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.328 [INFO][4777] k8s.go 608: Cleaning up netns ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.328 [INFO][4777] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" iface="eth0" netns="/var/run/netns/cni-99378135-c3b9-1637-1ac6-fac023ce9e24" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.328 [INFO][4777] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" iface="eth0" netns="/var/run/netns/cni-99378135-c3b9-1637-1ac6-fac023ce9e24" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.328 [INFO][4777] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" iface="eth0" netns="/var/run/netns/cni-99378135-c3b9-1637-1ac6-fac023ce9e24" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.328 [INFO][4777] k8s.go 615: Releasing IP address(es) ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.328 [INFO][4777] utils.go 188: Calico CNI releasing IP address ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.358 [INFO][4790] ipam_plugin.go 411: Releasing address using handleID ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.358 [INFO][4790] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.358 [INFO][4790] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.366 [WARNING][4790] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.366 [INFO][4790] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.367 [INFO][4790] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:44:46.371944 containerd[1701]: 2024-08-05 21:44:46.369 [INFO][4777] k8s.go 621: Teardown processing complete. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:44:46.371944 containerd[1701]: time="2024-08-05T21:44:46.371920442Z" level=info msg="TearDown network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" successfully" Aug 5 21:44:46.373737 containerd[1701]: time="2024-08-05T21:44:46.371956122Z" level=info msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" returns successfully" Aug 5 21:44:46.373737 containerd[1701]: time="2024-08-05T21:44:46.372716323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h95c8,Uid:c9a0c74c-5165-4eb9-a79b-c4b8e106b10a,Namespace:kube-system,Attempt:1,}" Aug 5 21:44:46.377658 systemd[1]: run-netns-cni\x2d99378135\x2dc3b9\x2d1637\x2d1ac6\x2dfac023ce9e24.mount: Deactivated successfully. Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.331 [INFO][4778] k8s.go 608: Cleaning up netns ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.332 [INFO][4778] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" iface="eth0" netns="/var/run/netns/cni-27869086-0f5b-e430-e995-f1ce37b9e3fb" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.333 [INFO][4778] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" iface="eth0" netns="/var/run/netns/cni-27869086-0f5b-e430-e995-f1ce37b9e3fb" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.334 [INFO][4778] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" iface="eth0" netns="/var/run/netns/cni-27869086-0f5b-e430-e995-f1ce37b9e3fb" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.334 [INFO][4778] k8s.go 615: Releasing IP address(es) ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.334 [INFO][4778] utils.go 188: Calico CNI releasing IP address ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.358 [INFO][4794] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.358 [INFO][4794] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.367 [INFO][4794] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.380 [WARNING][4794] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.380 [INFO][4794] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.381 [INFO][4794] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:44:46.385411 containerd[1701]: 2024-08-05 21:44:46.383 [INFO][4778] k8s.go 621: Teardown processing complete. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:44:46.386561 containerd[1701]: time="2024-08-05T21:44:46.386521425Z" level=info msg="TearDown network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" successfully" Aug 5 21:44:46.386561 containerd[1701]: time="2024-08-05T21:44:46.386558305Z" level=info msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" returns successfully" Aug 5 21:44:46.388665 systemd[1]: run-netns-cni\x2d27869086\x2d0f5b\x2de430\x2de995\x2df1ce37b9e3fb.mount: Deactivated successfully. 
Aug 5 21:44:46.389455 containerd[1701]: time="2024-08-05T21:44:46.389023149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj2wt,Uid:2e6ea574-1f1c-4e52-898c-5cca482be0c0,Namespace:kube-system,Attempt:1,}" Aug 5 21:44:46.598879 systemd-networkd[1451]: calic91f90f1f86: Link UP Aug 5 21:44:46.600352 systemd-networkd[1451]: calic91f90f1f86: Gained carrier Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.480 [INFO][4803] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0 coredns-76f75df574- kube-system c9a0c74c-5165-4eb9-a79b-c4b8e106b10a 798 0 2024-08-05 21:43:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.1.0-a-183bdb833d coredns-76f75df574-h95c8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic91f90f1f86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.480 [INFO][4803] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.526 [INFO][4827] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" HandleID="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.547 [INFO][4827] ipam_plugin.go 264: Auto assigning IP ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" HandleID="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003162a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.1.0-a-183bdb833d", "pod":"coredns-76f75df574-h95c8", "timestamp":"2024-08-05 21:44:46.526638767 +0000 UTC"}, Hostname:"ci-4012.1.0-a-183bdb833d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.548 [INFO][4827] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.548 [INFO][4827] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.548 [INFO][4827] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-183bdb833d' Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.551 [INFO][4827] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.565 [INFO][4827] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.573 [INFO][4827] ipam.go 489: Trying affinity for 192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.576 [INFO][4827] ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.579 [INFO][4827] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.579 [INFO][4827] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.581 [INFO][4827] ipam.go 1685: Creating new handle: k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.585 [INFO][4827] ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.592 [INFO][4827] ipam.go 1216: Successfully claimed IPs: [192.168.54.67/26] block=192.168.54.64/26 handle="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.592 [INFO][4827] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.67/26] handle="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.592 [INFO][4827] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:44:46.621319 containerd[1701]: 2024-08-05 21:44:46.592 [INFO][4827] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.54.67/26] IPv6=[] ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" HandleID="k8s-pod-network.2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.622104 containerd[1701]: 2024-08-05 21:44:46.595 [INFO][4803] k8s.go 386: Populated endpoint ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"", Pod:"coredns-76f75df574-h95c8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91f90f1f86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:46.622104 containerd[1701]: 2024-08-05 21:44:46.596 [INFO][4803] k8s.go 387: Calico CNI using IPs: [192.168.54.67/32] ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.622104 containerd[1701]: 2024-08-05 21:44:46.596 [INFO][4803] dataplane_linux.go 68: Setting the host side veth name to calic91f90f1f86 ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.622104 containerd[1701]: 2024-08-05 21:44:46.599 [INFO][4803] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" 
Aug 5 21:44:46.622104 containerd[1701]: 2024-08-05 21:44:46.601 [INFO][4803] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e", Pod:"coredns-76f75df574-h95c8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91f90f1f86", MAC:"16:2b:63:e9:ec:fc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:46.622104 containerd[1701]: 2024-08-05 21:44:46.619 [INFO][4803] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e" Namespace="kube-system" Pod="coredns-76f75df574-h95c8" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:44:46.689181 containerd[1701]: time="2024-08-05T21:44:46.682849535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:44:46.689181 containerd[1701]: time="2024-08-05T21:44:46.682946935Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:46.689181 containerd[1701]: time="2024-08-05T21:44:46.683088375Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:44:46.689181 containerd[1701]: time="2024-08-05T21:44:46.683102975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:46.702384 systemd-networkd[1451]: calib30b7817879: Link UP Aug 5 21:44:46.703635 systemd-networkd[1451]: calib30b7817879: Gained carrier Aug 5 21:44:46.721016 systemd[1]: Started cri-containerd-2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e.scope - libcontainer container 2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e. Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.496 [INFO][4811] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0 coredns-76f75df574- kube-system 2e6ea574-1f1c-4e52-898c-5cca482be0c0 799 0 2024-08-05 21:43:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.1.0-a-183bdb833d coredns-76f75df574-gj2wt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib30b7817879 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.496 [INFO][4811] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.550 [INFO][4832] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" HandleID="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.566 [INFO][4832] ipam_plugin.go 264: Auto assigning IP ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" HandleID="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edc90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.1.0-a-183bdb833d", "pod":"coredns-76f75df574-gj2wt", "timestamp":"2024-08-05 21:44:46.550428285 +0000 UTC"}, Hostname:"ci-4012.1.0-a-183bdb833d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.566 [INFO][4832] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.592 [INFO][4832] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.593 [INFO][4832] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-183bdb833d' Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.596 [INFO][4832] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.609 [INFO][4832] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.624 [INFO][4832] ipam.go 489: Trying affinity for 192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.627 [INFO][4832] ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.632 [INFO][4832] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.632 [INFO][4832] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.634 [INFO][4832] ipam.go 1685: Creating new handle: k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202 Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.641 [INFO][4832] ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.660 [INFO][4832] ipam.go 1216: Successfully claimed IPs: [192.168.54.68/26] block=192.168.54.64/26 handle="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.660 [INFO][4832] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.68/26] handle="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.660 [INFO][4832] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:44:46.733844 containerd[1701]: 2024-08-05 21:44:46.660 [INFO][4832] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.54.68/26] IPv6=[] ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" HandleID="k8s-pod-network.425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.734521 containerd[1701]: 2024-08-05 21:44:46.673 [INFO][4811] k8s.go 386: Populated endpoint ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e6ea574-1f1c-4e52-898c-5cca482be0c0", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"", Pod:"coredns-76f75df574-gj2wt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib30b7817879", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:46.734521 containerd[1701]: 2024-08-05 21:44:46.673 [INFO][4811] k8s.go 387: Calico CNI using IPs: [192.168.54.68/32] ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.734521 containerd[1701]: 2024-08-05 21:44:46.673 [INFO][4811] dataplane_linux.go 68: Setting the host side veth name to calib30b7817879 ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.734521 containerd[1701]: 2024-08-05 21:44:46.704 [INFO][4811] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" 
Aug 5 21:44:46.734521 containerd[1701]: 2024-08-05 21:44:46.704 [INFO][4811] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e6ea574-1f1c-4e52-898c-5cca482be0c0", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202", Pod:"coredns-76f75df574-gj2wt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib30b7817879", MAC:"72:12:8f:9a:d1:f4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:44:46.734521 containerd[1701]: 2024-08-05 21:44:46.722 [INFO][4811] k8s.go 500: Wrote updated endpoint to datastore ContainerID="425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202" Namespace="kube-system" Pod="coredns-76f75df574-gj2wt" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:44:46.789474 containerd[1701]: time="2024-08-05T21:44:46.789426984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-h95c8,Uid:c9a0c74c-5165-4eb9-a79b-c4b8e106b10a,Namespace:kube-system,Attempt:1,} returns sandbox id \"2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e\"" Aug 5 21:44:46.791351 containerd[1701]: time="2024-08-05T21:44:46.791115907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:44:46.791351 containerd[1701]: time="2024-08-05T21:44:46.791212427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:46.791351 containerd[1701]: time="2024-08-05T21:44:46.791242787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:44:46.791351 containerd[1701]: time="2024-08-05T21:44:46.791258067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:44:46.798719 containerd[1701]: time="2024-08-05T21:44:46.798526318Z" level=info msg="CreateContainer within sandbox \"2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:44:46.820531 systemd[1]: Started cri-containerd-425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202.scope - libcontainer container 425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202. Aug 5 21:44:46.852211 containerd[1701]: time="2024-08-05T21:44:46.852063123Z" level=info msg="CreateContainer within sandbox \"2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f420a2d3fe067580b47a7eaa124353d46a727ade948718721411544e9bda8071\"" Aug 5 21:44:46.857369 containerd[1701]: time="2024-08-05T21:44:46.856421290Z" level=info msg="StartContainer for \"f420a2d3fe067580b47a7eaa124353d46a727ade948718721411544e9bda8071\"" Aug 5 21:44:46.869751 containerd[1701]: time="2024-08-05T21:44:46.869675631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-gj2wt,Uid:2e6ea574-1f1c-4e52-898c-5cca482be0c0,Namespace:kube-system,Attempt:1,} returns sandbox id \"425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202\"" Aug 5 21:44:46.877309 containerd[1701]: time="2024-08-05T21:44:46.877250843Z" level=info msg="CreateContainer within sandbox \"425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 21:44:46.900392 systemd[1]: Started cri-containerd-f420a2d3fe067580b47a7eaa124353d46a727ade948718721411544e9bda8071.scope - libcontainer container f420a2d3fe067580b47a7eaa124353d46a727ade948718721411544e9bda8071. Aug 5 21:44:46.923600 containerd[1701]: time="2024-08-05T21:44:46.923211356Z" level=info msg="CreateContainer within sandbox \"425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6f23dc7cc7ca22cfa0622e7c273939fdf28f321f6d1d8544c1f05415ad7604b5\"" Aug 5 21:44:46.924123 containerd[1701]: time="2024-08-05T21:44:46.924086277Z" level=info msg="StartContainer for \"6f23dc7cc7ca22cfa0622e7c273939fdf28f321f6d1d8544c1f05415ad7604b5\"" Aug 5 21:44:46.971435 systemd[1]: Started cri-containerd-6f23dc7cc7ca22cfa0622e7c273939fdf28f321f6d1d8544c1f05415ad7604b5.scope - libcontainer container 6f23dc7cc7ca22cfa0622e7c273939fdf28f321f6d1d8544c1f05415ad7604b5. 
Aug 5 21:44:46.992533 containerd[1701]: time="2024-08-05T21:44:46.992470626Z" level=info msg="StartContainer for \"f420a2d3fe067580b47a7eaa124353d46a727ade948718721411544e9bda8071\" returns successfully" Aug 5 21:44:47.019375 containerd[1701]: time="2024-08-05T21:44:47.019092108Z" level=info msg="StartContainer for \"6f23dc7cc7ca22cfa0622e7c273939fdf28f321f6d1d8544c1f05415ad7604b5\" returns successfully" Aug 5 21:44:47.021454 containerd[1701]: time="2024-08-05T21:44:47.021313952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:47.025564 containerd[1701]: time="2024-08-05T21:44:47.025502038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Aug 5 21:44:47.030324 containerd[1701]: time="2024-08-05T21:44:47.030225646Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:47.035962 containerd[1701]: time="2024-08-05T21:44:47.035097853Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:47.035962 containerd[1701]: time="2024-08-05T21:44:47.035812655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.739615039s" Aug 5 21:44:47.035962 containerd[1701]: time="2024-08-05T21:44:47.035849175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Aug 5 21:44:47.036766 containerd[1701]: time="2024-08-05T21:44:47.036334895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 21:44:47.039989 containerd[1701]: time="2024-08-05T21:44:47.039727821Z" level=info msg="CreateContainer within sandbox \"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 21:44:47.091121 containerd[1701]: time="2024-08-05T21:44:47.091069182Z" level=info msg="CreateContainer within sandbox \"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"087aab99b2008661ec14ca4a068bf1a327be83b384229212437362b9483ee264\"" Aug 5 21:44:47.092881 containerd[1701]: time="2024-08-05T21:44:47.092587505Z" level=info msg="StartContainer for \"087aab99b2008661ec14ca4a068bf1a327be83b384229212437362b9483ee264\"" Aug 5 21:44:47.117370 systemd[1]: Started cri-containerd-087aab99b2008661ec14ca4a068bf1a327be83b384229212437362b9483ee264.scope - libcontainer container 087aab99b2008661ec14ca4a068bf1a327be83b384229212437362b9483ee264. 
Aug 5 21:44:47.149621 containerd[1701]: time="2024-08-05T21:44:47.149555475Z" level=info msg="StartContainer for \"087aab99b2008661ec14ca4a068bf1a327be83b384229212437362b9483ee264\" returns successfully" Aug 5 21:44:47.406721 kubelet[3211]: I0805 21:44:47.406628 3211 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 21:44:47.406721 kubelet[3211]: I0805 21:44:47.406661 3211 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 21:44:47.532394 systemd-networkd[1451]: calib559fb93c57: Gained IPv6LL Aug 5 21:44:47.582105 kubelet[3211]: I0805 21:44:47.581909 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-grpvp" podStartSLOduration=49.284049771 podStartE2EDuration="52.5818652s" podCreationTimestamp="2024-08-05 21:43:55 +0000 UTC" firstStartedPulling="2024-08-05 21:44:43.738357146 +0000 UTC m=+70.583348828" lastFinishedPulling="2024-08-05 21:44:47.036172575 +0000 UTC m=+73.881164257" observedRunningTime="2024-08-05 21:44:47.58137936 +0000 UTC m=+74.426371082" watchObservedRunningTime="2024-08-05 21:44:47.5818652 +0000 UTC m=+74.426856922" Aug 5 21:44:47.582736 kubelet[3211]: I0805 21:44:47.582656 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-gj2wt" podStartSLOduration=60.582616962 podStartE2EDuration="1m0.582616962s" podCreationTimestamp="2024-08-05 21:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:44:47.560247926 +0000 UTC m=+74.405239648" watchObservedRunningTime="2024-08-05 21:44:47.582616962 +0000 UTC m=+74.427608684" Aug 5 21:44:48.044348 systemd-networkd[1451]: calib30b7817879: Gained IPv6LL Aug 5 21:44:48.492403 systemd-networkd[1451]: calic91f90f1f86: Gained IPv6LL Aug 5 21:44:50.023655 containerd[1701]: time="2024-08-05T21:44:50.023573395Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:50.025429 containerd[1701]: time="2024-08-05T21:44:50.025359038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Aug 5 21:44:50.029545 containerd[1701]: time="2024-08-05T21:44:50.029502126Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:50.034518 containerd[1701]: time="2024-08-05T21:44:50.034440735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:44:50.035404 containerd[1701]: time="2024-08-05T21:44:50.035200057Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 2.998833802s" Aug 5 21:44:50.035404 containerd[1701]: 
time="2024-08-05T21:44:50.035267497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Aug 5 21:44:50.056480 containerd[1701]: time="2024-08-05T21:44:50.056301537Z" level=info msg="CreateContainer within sandbox \"c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 21:44:50.091714 containerd[1701]: time="2024-08-05T21:44:50.091666364Z" level=info msg="CreateContainer within sandbox \"c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"03fba76ee9f46046ed24e4552dd928a1929d77b82faf47fe8cdeb00b60a84f49\"" Aug 5 21:44:50.093570 containerd[1701]: time="2024-08-05T21:44:50.092340405Z" level=info msg="StartContainer for \"03fba76ee9f46046ed24e4552dd928a1929d77b82faf47fe8cdeb00b60a84f49\"" Aug 5 21:44:50.122388 systemd[1]: Started cri-containerd-03fba76ee9f46046ed24e4552dd928a1929d77b82faf47fe8cdeb00b60a84f49.scope - libcontainer container 03fba76ee9f46046ed24e4552dd928a1929d77b82faf47fe8cdeb00b60a84f49. Aug 5 21:44:50.164329 containerd[1701]: time="2024-08-05T21:44:50.163616181Z" level=info msg="StartContainer for \"03fba76ee9f46046ed24e4552dd928a1929d77b82faf47fe8cdeb00b60a84f49\" returns successfully" Aug 5 21:44:50.578204 kubelet[3211]: I0805 21:44:50.576998 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f9f5bd8b5-h2rwc" podStartSLOduration=51.178790066 podStartE2EDuration="55.576952166s" podCreationTimestamp="2024-08-05 21:43:55 +0000 UTC" firstStartedPulling="2024-08-05 21:44:45.637359077 +0000 UTC m=+72.482350759" lastFinishedPulling="2024-08-05 21:44:50.035521137 +0000 UTC m=+76.880512859" observedRunningTime="2024-08-05 21:44:50.574594841 +0000 UTC m=+77.419586563" watchObservedRunningTime="2024-08-05 21:44:50.576952166 +0000 UTC m=+77.421943888" Aug 5 21:44:50.578204 kubelet[3211]: I0805 21:44:50.577249 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-h95c8" podStartSLOduration=63.577230166 podStartE2EDuration="1m3.577230166s" podCreationTimestamp="2024-08-05 21:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 21:44:47.617823897 +0000 UTC m=+74.462815619" watchObservedRunningTime="2024-08-05 21:44:50.577230166 +0000 UTC m=+77.422221888" Aug 5 21:45:04.518327 kubelet[3211]: I0805 21:45:04.517330 3211 topology_manager.go:215] "Topology Admit Handler" podUID="e76b8de3-c3fd-482c-9aae-93b834c32ddd" podNamespace="calico-apiserver" podName="calico-apiserver-7d7f6bd85f-5llxg" Aug 5 21:45:04.531277 systemd[1]: Created slice kubepods-besteffort-pode76b8de3_c3fd_482c_9aae_93b834c32ddd.slice - libcontainer container kubepods-besteffort-pode76b8de3_c3fd_482c_9aae_93b834c32ddd.slice. Aug 5 21:45:04.536222 kubelet[3211]: I0805 21:45:04.536147 3211 topology_manager.go:215] "Topology Admit Handler" podUID="ea1e1fc7-569a-484f-a8f0-d107e07baa1a" podNamespace="calico-apiserver" podName="calico-apiserver-7d7f6bd85f-6vzsj" Aug 5 21:45:04.545103 systemd[1]: Created slice kubepods-besteffort-podea1e1fc7_569a_484f_a8f0_d107e07baa1a.slice - libcontainer container kubepods-besteffort-podea1e1fc7_569a_484f_a8f0_d107e07baa1a.slice. 
Aug 5 21:45:04.565792 kubelet[3211]: I0805 21:45:04.565759 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfcc5\" (UniqueName: \"kubernetes.io/projected/ea1e1fc7-569a-484f-a8f0-d107e07baa1a-kube-api-access-hfcc5\") pod \"calico-apiserver-7d7f6bd85f-6vzsj\" (UID: \"ea1e1fc7-569a-484f-a8f0-d107e07baa1a\") " pod="calico-apiserver/calico-apiserver-7d7f6bd85f-6vzsj" Aug 5 21:45:04.566288 kubelet[3211]: I0805 21:45:04.566254 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e76b8de3-c3fd-482c-9aae-93b834c32ddd-calico-apiserver-certs\") pod \"calico-apiserver-7d7f6bd85f-5llxg\" (UID: \"e76b8de3-c3fd-482c-9aae-93b834c32ddd\") " pod="calico-apiserver/calico-apiserver-7d7f6bd85f-5llxg" Aug 5 21:45:04.566453 kubelet[3211]: I0805 21:45:04.566342 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-687vc\" (UniqueName: \"kubernetes.io/projected/e76b8de3-c3fd-482c-9aae-93b834c32ddd-kube-api-access-687vc\") pod \"calico-apiserver-7d7f6bd85f-5llxg\" (UID: \"e76b8de3-c3fd-482c-9aae-93b834c32ddd\") " pod="calico-apiserver/calico-apiserver-7d7f6bd85f-5llxg" Aug 5 21:45:04.566453 kubelet[3211]: I0805 21:45:04.566392 3211 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ea1e1fc7-569a-484f-a8f0-d107e07baa1a-calico-apiserver-certs\") pod \"calico-apiserver-7d7f6bd85f-6vzsj\" (UID: \"ea1e1fc7-569a-484f-a8f0-d107e07baa1a\") " pod="calico-apiserver/calico-apiserver-7d7f6bd85f-6vzsj" Aug 5 21:45:04.669460 kubelet[3211]: E0805 21:45:04.669305 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 21:45:04.669460 kubelet[3211]: E0805 21:45:04.669435 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e76b8de3-c3fd-482c-9aae-93b834c32ddd-calico-apiserver-certs podName:e76b8de3-c3fd-482c-9aae-93b834c32ddd nodeName:}" failed. No retries permitted until 2024-08-05 21:45:05.169404928 +0000 UTC m=+92.014396650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/e76b8de3-c3fd-482c-9aae-93b834c32ddd-calico-apiserver-certs") pod "calico-apiserver-7d7f6bd85f-5llxg" (UID: "e76b8de3-c3fd-482c-9aae-93b834c32ddd") : secret "calico-apiserver-certs" not found Aug 5 21:45:04.669894 kubelet[3211]: E0805 21:45:04.669305 3211 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 21:45:04.669894 kubelet[3211]: E0805 21:45:04.669594 3211 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ea1e1fc7-569a-484f-a8f0-d107e07baa1a-calico-apiserver-certs podName:ea1e1fc7-569a-484f-a8f0-d107e07baa1a nodeName:}" failed. No retries permitted until 2024-08-05 21:45:05.169580568 +0000 UTC m=+92.014572290 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/ea1e1fc7-569a-484f-a8f0-d107e07baa1a-calico-apiserver-certs") pod "calico-apiserver-7d7f6bd85f-6vzsj" (UID: "ea1e1fc7-569a-484f-a8f0-d107e07baa1a") : secret "calico-apiserver-certs" not found Aug 5 21:45:05.439669 containerd[1701]: time="2024-08-05T21:45:05.439611306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7f6bd85f-5llxg,Uid:e76b8de3-c3fd-482c-9aae-93b834c32ddd,Namespace:calico-apiserver,Attempt:0,}" Aug 5 21:45:05.450185 containerd[1701]: time="2024-08-05T21:45:05.449983844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7f6bd85f-6vzsj,Uid:ea1e1fc7-569a-484f-a8f0-d107e07baa1a,Namespace:calico-apiserver,Attempt:0,}" Aug 5 21:45:05.664646 systemd-networkd[1451]: cali7659081d58c: Link UP Aug 5 21:45:05.667414 systemd-networkd[1451]: cali7659081d58c: Gained carrier Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.547 [INFO][5201] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0 calico-apiserver-7d7f6bd85f- calico-apiserver e76b8de3-c3fd-482c-9aae-93b834c32ddd 924 0 2024-08-05 21:45:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d7f6bd85f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.1.0-a-183bdb833d calico-apiserver-7d7f6bd85f-5llxg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7659081d58c [] []}} ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.548 [INFO][5201] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.600 [INFO][5224] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" HandleID="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.614 [INFO][5224] ipam_plugin.go 264: Auto assigning IP ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" HandleID="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001169c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.1.0-a-183bdb833d", "pod":"calico-apiserver-7d7f6bd85f-5llxg", "timestamp":"2024-08-05 21:45:05.600828714 +0000 UTC"}, Hostname:"ci-4012.1.0-a-183bdb833d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.614 [INFO][5224] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.614 [INFO][5224] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.614 [INFO][5224] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-183bdb833d' Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.617 [INFO][5224] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.622 [INFO][5224] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.628 [INFO][5224] ipam.go 489: Trying affinity for 192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.631 [INFO][5224] ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.635 [INFO][5224] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.635 [INFO][5224] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.638 [INFO][5224] ipam.go 1685: Creating new handle: k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083 Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.643 [INFO][5224] ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.653 [INFO][5224] ipam.go 1216: Successfully claimed IPs: [192.168.54.69/26] block=192.168.54.64/26 handle="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.653 [INFO][5224] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.69/26] handle="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.654 [INFO][5224] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:45:05.686468 containerd[1701]: 2024-08-05 21:45:05.654 [INFO][5224] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.54.69/26] IPv6=[] ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" HandleID="k8s-pod-network.2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.686993 containerd[1701]: 2024-08-05 21:45:05.657 [INFO][5201] k8s.go 386: Populated endpoint ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0", GenerateName:"calico-apiserver-7d7f6bd85f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e76b8de3-c3fd-482c-9aae-93b834c32ddd", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7f6bd85f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"", Pod:"calico-apiserver-7d7f6bd85f-5llxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7659081d58c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:05.686993 containerd[1701]: 2024-08-05 21:45:05.658 [INFO][5201] k8s.go 387: Calico CNI using IPs: [192.168.54.69/32] ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.686993 containerd[1701]: 2024-08-05 21:45:05.658 [INFO][5201] dataplane_linux.go 68: Setting the host side veth name to cali7659081d58c ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.686993 containerd[1701]: 2024-08-05 21:45:05.664 [INFO][5201] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.686993 containerd[1701]: 2024-08-05 21:45:05.667 [INFO][5201] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0", GenerateName:"calico-apiserver-7d7f6bd85f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e76b8de3-c3fd-482c-9aae-93b834c32ddd", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7f6bd85f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083", Pod:"calico-apiserver-7d7f6bd85f-5llxg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7659081d58c", MAC:"5e:21:61:b3:01:90", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:05.686993 containerd[1701]: 2024-08-05 21:45:05.677 [INFO][5201] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-5llxg" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--5llxg-eth0" Aug 5 21:45:05.734545 containerd[1701]: time="2024-08-05T21:45:05.732060429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:45:05.734545 containerd[1701]: time="2024-08-05T21:45:05.732264469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:45:05.734545 containerd[1701]: time="2024-08-05T21:45:05.732349669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:45:05.734930 containerd[1701]: time="2024-08-05T21:45:05.732387629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:45:05.746802 systemd-networkd[1451]: calic903fd70f50: Link UP Aug 5 21:45:05.748913 systemd-networkd[1451]: calic903fd70f50: Gained carrier Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.571 [INFO][5205] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0 calico-apiserver-7d7f6bd85f- calico-apiserver ea1e1fc7-569a-484f-a8f0-d107e07baa1a 928 0 2024-08-05 21:45:04 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d7f6bd85f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.1.0-a-183bdb833d calico-apiserver-7d7f6bd85f-6vzsj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic903fd70f50 [] []}} ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.572 [INFO][5205] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.635 [INFO][5231] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" HandleID="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.649 [INFO][5231] ipam_plugin.go 264: Auto assigning IP ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" HandleID="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001020f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.1.0-a-183bdb833d", "pod":"calico-apiserver-7d7f6bd85f-6vzsj", "timestamp":"2024-08-05 21:45:05.635017655 +0000 UTC"}, Hostname:"ci-4012.1.0-a-183bdb833d", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.650 [INFO][5231] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.654 [INFO][5231] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.654 [INFO][5231] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-183bdb833d' Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.657 [INFO][5231] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.670 [INFO][5231] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.696 [INFO][5231] ipam.go 489: Trying affinity for 192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.702 [INFO][5231] ipam.go 155: Attempting to load block cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.708 [INFO][5231] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.64/26 host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.708 [INFO][5231] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.64/26 handle="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.710 [INFO][5231] ipam.go 1685: Creating new handle: k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.718 [INFO][5231] ipam.go 1203: Writing block in order to claim IPs block=192.168.54.64/26 handle="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.731 [INFO][5231] ipam.go 1216: Successfully claimed IPs: [192.168.54.70/26] block=192.168.54.64/26 handle="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.731 [INFO][5231] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.70/26] handle="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" host="ci-4012.1.0-a-183bdb833d" Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.731 [INFO][5231] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 21:45:05.765265 containerd[1701]: 2024-08-05 21:45:05.731 [INFO][5231] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.54.70/26] IPv6=[] ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" HandleID="k8s-pod-network.aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.766839 containerd[1701]: 2024-08-05 21:45:05.735 [INFO][5205] k8s.go 386: Populated endpoint ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0", GenerateName:"calico-apiserver-7d7f6bd85f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea1e1fc7-569a-484f-a8f0-d107e07baa1a", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7f6bd85f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"", Pod:"calico-apiserver-7d7f6bd85f-6vzsj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic903fd70f50", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:05.766839 containerd[1701]: 2024-08-05 21:45:05.736 [INFO][5205] k8s.go 387: Calico CNI using IPs: [192.168.54.70/32] ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.766839 containerd[1701]: 2024-08-05 21:45:05.736 [INFO][5205] dataplane_linux.go 68: Setting the host side veth name to calic903fd70f50 ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.766839 containerd[1701]: 2024-08-05 21:45:05.748 [INFO][5205] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.766839 containerd[1701]: 2024-08-05 21:45:05.749 [INFO][5205] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0", GenerateName:"calico-apiserver-7d7f6bd85f-", Namespace:"calico-apiserver", SelfLink:"", UID:"ea1e1fc7-569a-484f-a8f0-d107e07baa1a", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 45, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d7f6bd85f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d", Pod:"calico-apiserver-7d7f6bd85f-6vzsj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.54.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic903fd70f50", MAC:"d2:7b:c5:ce:95:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:05.766839 containerd[1701]: 2024-08-05 21:45:05.759 [INFO][5205] k8s.go 500: Wrote updated endpoint to datastore ContainerID="aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d" Namespace="calico-apiserver" Pod="calico-apiserver-7d7f6bd85f-6vzsj" WorkloadEndpoint="ci--4012.1.0--a--183bdb833d-k8s-calico--apiserver--7d7f6bd85f--6vzsj-eth0" Aug 5 21:45:05.791978 systemd[1]: Started cri-containerd-2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083.scope - libcontainer container 2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083. Aug 5 21:45:05.818626 containerd[1701]: time="2024-08-05T21:45:05.818074543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 21:45:05.818626 containerd[1701]: time="2024-08-05T21:45:05.818139423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:45:05.818626 containerd[1701]: time="2024-08-05T21:45:05.818167983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 21:45:05.818626 containerd[1701]: time="2024-08-05T21:45:05.818181943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 21:45:05.850832 systemd[1]: Started cri-containerd-aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d.scope - libcontainer container aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d. 
Aug 5 21:45:05.888012 containerd[1701]: time="2024-08-05T21:45:05.887969988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7f6bd85f-5llxg,Uid:e76b8de3-c3fd-482c-9aae-93b834c32ddd,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083\"" Aug 5 21:45:05.899791 containerd[1701]: time="2024-08-05T21:45:05.899693488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d7f6bd85f-6vzsj,Uid:ea1e1fc7-569a-484f-a8f0-d107e07baa1a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d\"" Aug 5 21:45:05.902514 containerd[1701]: time="2024-08-05T21:45:05.902327333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 21:45:06.988507 systemd-networkd[1451]: cali7659081d58c: Gained IPv6LL Aug 5 21:45:07.756452 systemd-networkd[1451]: calic903fd70f50: Gained IPv6LL Aug 5 21:45:09.671116 containerd[1701]: time="2024-08-05T21:45:09.671052470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:45:09.673140 containerd[1701]: time="2024-08-05T21:45:09.673074073Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Aug 5 21:45:09.677148 containerd[1701]: time="2024-08-05T21:45:09.677085920Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:45:09.683898 containerd[1701]: time="2024-08-05T21:45:09.683798292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:45:09.684710 containerd[1701]: time="2024-08-05T21:45:09.684666533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.78229516s" Aug 5 21:45:09.684813 containerd[1701]: time="2024-08-05T21:45:09.684796814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Aug 5 21:45:09.685444 containerd[1701]: time="2024-08-05T21:45:09.685408935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 21:45:09.687918 containerd[1701]: time="2024-08-05T21:45:09.687846619Z" level=info msg="CreateContainer within sandbox \"2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 21:45:09.729117 containerd[1701]: time="2024-08-05T21:45:09.729068491Z" level=info msg="CreateContainer within sandbox \"2890eb27ebc8a37513409ab95b15c1bbf7c87fc27d923a50d04d5432dc457083\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6506105e98d403ff3fac9928c3b0d9d426f4305bdbd40a0f6c0d6ee82b86e4a2\"" Aug 5 21:45:09.730200 containerd[1701]: time="2024-08-05T21:45:09.729992172Z" level=info msg="StartContainer for 
\"6506105e98d403ff3fac9928c3b0d9d426f4305bdbd40a0f6c0d6ee82b86e4a2\"" Aug 5 21:45:09.770372 systemd[1]: Started cri-containerd-6506105e98d403ff3fac9928c3b0d9d426f4305bdbd40a0f6c0d6ee82b86e4a2.scope - libcontainer container 6506105e98d403ff3fac9928c3b0d9d426f4305bdbd40a0f6c0d6ee82b86e4a2. Aug 5 21:45:09.806900 containerd[1701]: time="2024-08-05T21:45:09.806845306Z" level=info msg="StartContainer for \"6506105e98d403ff3fac9928c3b0d9d426f4305bdbd40a0f6c0d6ee82b86e4a2\" returns successfully" Aug 5 21:45:10.003346 containerd[1701]: time="2024-08-05T21:45:10.003167447Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 21:45:10.006864 containerd[1701]: time="2024-08-05T21:45:10.006811974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Aug 5 21:45:10.012438 containerd[1701]: time="2024-08-05T21:45:10.012334943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 326.884688ms" Aug 5 21:45:10.012438 containerd[1701]: time="2024-08-05T21:45:10.012416423Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Aug 5 21:45:10.017749 containerd[1701]: time="2024-08-05T21:45:10.017654713Z" level=info msg="CreateContainer within sandbox \"aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 21:45:10.060330 containerd[1701]: time="2024-08-05T21:45:10.060277987Z" level=info msg="CreateContainer within sandbox \"aec3ec4a144a1de9563dbb5b8484df55fe40de87367111f71d7133ed1b8e827d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"55a6afb0eb6a11abcdd93a229d6ad2c74ef77760c4cb9207cfb372d8bd392c37\"" Aug 5 21:45:10.061414 containerd[1701]: time="2024-08-05T21:45:10.061354789Z" level=info msg="StartContainer for \"55a6afb0eb6a11abcdd93a229d6ad2c74ef77760c4cb9207cfb372d8bd392c37\"" Aug 5 21:45:10.102377 systemd[1]: Started cri-containerd-55a6afb0eb6a11abcdd93a229d6ad2c74ef77760c4cb9207cfb372d8bd392c37.scope - libcontainer container 55a6afb0eb6a11abcdd93a229d6ad2c74ef77760c4cb9207cfb372d8bd392c37. 
Aug 5 21:45:10.154588 containerd[1701]: time="2024-08-05T21:45:10.154529231Z" level=info msg="StartContainer for \"55a6afb0eb6a11abcdd93a229d6ad2c74ef77760c4cb9207cfb372d8bd392c37\" returns successfully" Aug 5 21:45:10.655392 kubelet[3211]: I0805 21:45:10.655339 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d7f6bd85f-5llxg" podStartSLOduration=2.869287856 podStartE2EDuration="6.655290022s" podCreationTimestamp="2024-08-05 21:45:04 +0000 UTC" firstStartedPulling="2024-08-05 21:45:05.899291768 +0000 UTC m=+92.744283490" lastFinishedPulling="2024-08-05 21:45:09.685293934 +0000 UTC m=+96.530285656" observedRunningTime="2024-08-05 21:45:10.641375158 +0000 UTC m=+97.486366880" watchObservedRunningTime="2024-08-05 21:45:10.655290022 +0000 UTC m=+97.500281744" Aug 5 21:45:10.717463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3773326886.mount: Deactivated successfully. Aug 5 21:45:10.864061 kubelet[3211]: I0805 21:45:10.864017 3211 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d7f6bd85f-6vzsj" podStartSLOduration=2.755164777 podStartE2EDuration="6.863969665s" podCreationTimestamp="2024-08-05 21:45:04 +0000 UTC" firstStartedPulling="2024-08-05 21:45:05.903868376 +0000 UTC m=+92.748860098" lastFinishedPulling="2024-08-05 21:45:10.012673264 +0000 UTC m=+96.857664986" observedRunningTime="2024-08-05 21:45:10.655683223 +0000 UTC m=+97.500674945" watchObservedRunningTime="2024-08-05 21:45:10.863969665 +0000 UTC m=+97.708961387" Aug 5 21:45:33.831132 containerd[1701]: time="2024-08-05T21:45:33.831064470Z" level=info msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.865 [WARNING][5545] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e", Pod:"coredns-76f75df574-h95c8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91f90f1f86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.865 [INFO][5545] k8s.go 608: Cleaning up netns ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.865 [INFO][5545] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" iface="eth0" netns="" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.865 [INFO][5545] k8s.go 615: Releasing IP address(es) ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.865 [INFO][5545] utils.go 188: Calico CNI releasing IP address ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.884 [INFO][5551] ipam_plugin.go 411: Releasing address using handleID ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.884 [INFO][5551] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.884 [INFO][5551] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.891 [WARNING][5551] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.892 [INFO][5551] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.893 [INFO][5551] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:33.896079 containerd[1701]: 2024-08-05 21:45:33.894 [INFO][5545] k8s.go 621: Teardown processing complete. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.896510 containerd[1701]: time="2024-08-05T21:45:33.896130583Z" level=info msg="TearDown network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" successfully" Aug 5 21:45:33.896510 containerd[1701]: time="2024-08-05T21:45:33.896187823Z" level=info msg="StopPodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" returns successfully" Aug 5 21:45:33.896938 containerd[1701]: time="2024-08-05T21:45:33.896902145Z" level=info msg="RemovePodSandbox for \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" Aug 5 21:45:33.897003 containerd[1701]: time="2024-08-05T21:45:33.896943145Z" level=info msg="Forcibly stopping sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\"" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.929 [WARNING][5569] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"c9a0c74c-5165-4eb9-a79b-c4b8e106b10a", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"2e0fc5b1066d6d3b35bdcb1f898dc64ef7ea7f24df16e15c5971ee636b320a5e", Pod:"coredns-76f75df574-h95c8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic91f90f1f86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.929 [INFO][5569] k8s.go 608: Cleaning up netns ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.929 [INFO][5569] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" iface="eth0" netns="" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.929 [INFO][5569] k8s.go 615: Releasing IP address(es) ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.929 [INFO][5569] utils.go 188: Calico CNI releasing IP address ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.947 [INFO][5575] ipam_plugin.go 411: Releasing address using handleID ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.947 [INFO][5575] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.947 [INFO][5575] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.955 [WARNING][5575] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.955 [INFO][5575] ipam_plugin.go 439: Releasing address using workloadID ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" HandleID="k8s-pod-network.a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--h95c8-eth0" Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.956 [INFO][5575] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:33.959464 containerd[1701]: 2024-08-05 21:45:33.958 [INFO][5569] k8s.go 621: Teardown processing complete. ContainerID="a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505" Aug 5 21:45:33.959875 containerd[1701]: time="2024-08-05T21:45:33.959503333Z" level=info msg="TearDown network for sandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" successfully" Aug 5 21:45:33.969695 containerd[1701]: time="2024-08-05T21:45:33.969595311Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:45:33.969844 containerd[1701]: time="2024-08-05T21:45:33.969733791Z" level=info msg="RemovePodSandbox \"a6c57fcfed1deaa83e09d514d257b9aceb1c3d4d92faa6c3ef82c19d835f6505\" returns successfully" Aug 5 21:45:33.970401 containerd[1701]: time="2024-08-05T21:45:33.970365432Z" level=info msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.007 [WARNING][5593] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e6ea574-1f1c-4e52-898c-5cca482be0c0", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202", Pod:"coredns-76f75df574-gj2wt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib30b7817879", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.007 [INFO][5593] k8s.go 608: Cleaning up netns ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.007 [INFO][5593] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" iface="eth0" netns="" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.007 [INFO][5593] k8s.go 615: Releasing IP address(es) ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.007 [INFO][5593] utils.go 188: Calico CNI releasing IP address ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.027 [INFO][5599] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.028 [INFO][5599] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.028 [INFO][5599] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.037 [WARNING][5599] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.037 [INFO][5599] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.039 [INFO][5599] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:34.041770 containerd[1701]: 2024-08-05 21:45:34.040 [INFO][5593] k8s.go 621: Teardown processing complete. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.042512 containerd[1701]: time="2024-08-05T21:45:34.041815196Z" level=info msg="TearDown network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" successfully" Aug 5 21:45:34.042512 containerd[1701]: time="2024-08-05T21:45:34.041846356Z" level=info msg="StopPodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" returns successfully" Aug 5 21:45:34.042512 containerd[1701]: time="2024-08-05T21:45:34.042324077Z" level=info msg="RemovePodSandbox for \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" Aug 5 21:45:34.042512 containerd[1701]: time="2024-08-05T21:45:34.042357317Z" level=info msg="Forcibly stopping sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\"" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.079 [WARNING][5617] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"2e6ea574-1f1c-4e52-898c-5cca482be0c0", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"425124cfb6484e609150034155c6f1f51571858251ce4c852a05ccf9f53e1202", Pod:"coredns-76f75df574-gj2wt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.54.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib30b7817879", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.079 [INFO][5617] k8s.go 608: Cleaning up netns ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.079 [INFO][5617] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" iface="eth0" netns="" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.079 [INFO][5617] k8s.go 615: Releasing IP address(es) ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.079 [INFO][5617] utils.go 188: Calico CNI releasing IP address ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.098 [INFO][5624] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.098 [INFO][5624] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.098 [INFO][5624] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.107 [WARNING][5624] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.107 [INFO][5624] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" HandleID="k8s-pod-network.4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Workload="ci--4012.1.0--a--183bdb833d-k8s-coredns--76f75df574--gj2wt-eth0" Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.108 [INFO][5624] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:34.112455 containerd[1701]: 2024-08-05 21:45:34.109 [INFO][5617] k8s.go 621: Teardown processing complete. ContainerID="4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1" Aug 5 21:45:34.112455 containerd[1701]: time="2024-08-05T21:45:34.111418317Z" level=info msg="TearDown network for sandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" successfully" Aug 5 21:45:34.118839 containerd[1701]: time="2024-08-05T21:45:34.118791929Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:45:34.118957 containerd[1701]: time="2024-08-05T21:45:34.118896090Z" level=info msg="RemovePodSandbox \"4b757bfadfa141111673f75195e86f0ccd7d62784918b5fc000a53a6b6ed54e1\" returns successfully" Aug 5 21:45:34.119595 containerd[1701]: time="2024-08-05T21:45:34.119568811Z" level=info msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.156 [WARNING][5642] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0", GenerateName:"calico-kube-controllers-5f9f5bd8b5-", Namespace:"calico-system", SelfLink:"", UID:"055db446-940a-4e53-beae-809dff8b6ada", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9f5bd8b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39", Pod:"calico-kube-controllers-5f9f5bd8b5-h2rwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib559fb93c57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.157 [INFO][5642] k8s.go 608: Cleaning up netns ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.157 [INFO][5642] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" iface="eth0" netns="" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.157 [INFO][5642] k8s.go 615: Releasing IP address(es) ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.157 [INFO][5642] utils.go 188: Calico CNI releasing IP address ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.174 [INFO][5648] ipam_plugin.go 411: Releasing address using handleID ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.174 [INFO][5648] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.174 [INFO][5648] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.182 [WARNING][5648] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.182 [INFO][5648] ipam_plugin.go 439: Releasing address using workloadID ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.183 [INFO][5648] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:34.186260 containerd[1701]: 2024-08-05 21:45:34.185 [INFO][5642] k8s.go 621: Teardown processing complete. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.187378 containerd[1701]: time="2024-08-05T21:45:34.186275447Z" level=info msg="TearDown network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" successfully" Aug 5 21:45:34.187378 containerd[1701]: time="2024-08-05T21:45:34.186305967Z" level=info msg="StopPodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" returns successfully" Aug 5 21:45:34.187378 containerd[1701]: time="2024-08-05T21:45:34.186850048Z" level=info msg="RemovePodSandbox for \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" Aug 5 21:45:34.187378 containerd[1701]: time="2024-08-05T21:45:34.186883808Z" level=info msg="Forcibly stopping sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\"" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.223 [WARNING][5666] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0", GenerateName:"calico-kube-controllers-5f9f5bd8b5-", Namespace:"calico-system", SelfLink:"", UID:"055db446-940a-4e53-beae-809dff8b6ada", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f9f5bd8b5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"c1f6668012f23a8cc5996519cb81ccec51ef3d4d0fa5a6920398c978a5639e39", Pod:"calico-kube-controllers-5f9f5bd8b5-h2rwc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.54.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib559fb93c57", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.224 [INFO][5666] k8s.go 608: Cleaning up netns ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.224 [INFO][5666] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" iface="eth0" netns="" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.224 [INFO][5666] k8s.go 615: Releasing IP address(es) ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.224 [INFO][5666] utils.go 188: Calico CNI releasing IP address ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.242 [INFO][5672] ipam_plugin.go 411: Releasing address using handleID ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.242 [INFO][5672] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.242 [INFO][5672] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.250 [WARNING][5672] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.250 [INFO][5672] ipam_plugin.go 439: Releasing address using workloadID ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" HandleID="k8s-pod-network.34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Workload="ci--4012.1.0--a--183bdb833d-k8s-calico--kube--controllers--5f9f5bd8b5--h2rwc-eth0" Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.252 [INFO][5672] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:34.255135 containerd[1701]: 2024-08-05 21:45:34.253 [INFO][5666] k8s.go 621: Teardown processing complete. ContainerID="34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2" Aug 5 21:45:34.255606 containerd[1701]: time="2024-08-05T21:45:34.255211486Z" level=info msg="TearDown network for sandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" successfully" Aug 5 21:45:34.261867 containerd[1701]: time="2024-08-05T21:45:34.261816498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:45:34.261929 containerd[1701]: time="2024-08-05T21:45:34.261911138Z" level=info msg="RemovePodSandbox \"34931e424330c07f58954a40be78b88b0f088c4f745072fe52ea9a576dde22d2\" returns successfully" Aug 5 21:45:34.262614 containerd[1701]: time="2024-08-05T21:45:34.262568219Z" level=info msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.297 [WARNING][5690] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a43c2e97-b54c-4d04-a78d-358682744b6a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766", Pod:"csi-node-driver-grpvp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali980cd03338d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.297 [INFO][5690] k8s.go 608: Cleaning up netns ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.297 [INFO][5690] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" iface="eth0" netns="" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.297 [INFO][5690] k8s.go 615: Releasing IP address(es) ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.297 [INFO][5690] utils.go 188: Calico CNI releasing IP address ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.316 [INFO][5696] ipam_plugin.go 411: Releasing address using handleID ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.316 [INFO][5696] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.316 [INFO][5696] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.325 [WARNING][5696] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.325 [INFO][5696] ipam_plugin.go 439: Releasing address using workloadID ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.326 [INFO][5696] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:34.329199 containerd[1701]: 2024-08-05 21:45:34.327 [INFO][5690] k8s.go 621: Teardown processing complete. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.329199 containerd[1701]: time="2024-08-05T21:45:34.329148774Z" level=info msg="TearDown network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" successfully" Aug 5 21:45:34.329199 containerd[1701]: time="2024-08-05T21:45:34.329188014Z" level=info msg="StopPodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" returns successfully" Aug 5 21:45:34.329637 containerd[1701]: time="2024-08-05T21:45:34.329579575Z" level=info msg="RemovePodSandbox for \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" Aug 5 21:45:34.329661 containerd[1701]: time="2024-08-05T21:45:34.329616055Z" level=info msg="Forcibly stopping sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\"" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.363 [WARNING][5714] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a43c2e97-b54c-4d04-a78d-358682744b6a", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 21, 43, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-183bdb833d", ContainerID:"00b2caaa37195058c4aada25310064887929f336c045f962a0414e407f851766", Pod:"csi-node-driver-grpvp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali980cd03338d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.363 [INFO][5714] k8s.go 608: Cleaning up netns ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.363 [INFO][5714] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" iface="eth0" netns="" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.363 [INFO][5714] k8s.go 615: Releasing IP address(es) ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.363 [INFO][5714] utils.go 188: Calico CNI releasing IP address ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.383 [INFO][5720] ipam_plugin.go 411: Releasing address using handleID ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.383 [INFO][5720] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.383 [INFO][5720] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.391 [WARNING][5720] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.391 [INFO][5720] ipam_plugin.go 439: Releasing address using workloadID ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" HandleID="k8s-pod-network.776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Workload="ci--4012.1.0--a--183bdb833d-k8s-csi--node--driver--grpvp-eth0" Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.393 [INFO][5720] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 21:45:34.395833 containerd[1701]: 2024-08-05 21:45:34.394 [INFO][5714] k8s.go 621: Teardown processing complete. ContainerID="776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a" Aug 5 21:45:34.396303 containerd[1701]: time="2024-08-05T21:45:34.395826810Z" level=info msg="TearDown network for sandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" successfully" Aug 5 21:45:34.403763 containerd[1701]: time="2024-08-05T21:45:34.403713624Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 21:45:34.403866 containerd[1701]: time="2024-08-05T21:45:34.403822464Z" level=info msg="RemovePodSandbox \"776fa6c0f54a84aeb73794519446a60c7e8282fe26aac5aa6df3765b6243163a\" returns successfully" Aug 5 21:46:03.745491 systemd[1]: Started sshd@7-10.200.20.35:22-10.200.16.10:44850.service - OpenSSH per-connection server daemon (10.200.16.10:44850). Aug 5 21:46:04.207493 sshd[5796]: Accepted publickey for core from 10.200.16.10 port 44850 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:04.211949 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:04.216433 systemd-logind[1666]: New session 10 of user core. Aug 5 21:46:04.227291 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 21:46:04.629949 sshd[5796]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:04.634309 systemd-logind[1666]: Session 10 logged out. Waiting for processes to exit. Aug 5 21:46:04.635659 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 21:46:04.636711 systemd[1]: sshd@7-10.200.20.35:22-10.200.16.10:44850.service: Deactivated successfully. Aug 5 21:46:04.639965 systemd-logind[1666]: Removed session 10. Aug 5 21:46:09.717429 systemd[1]: Started sshd@8-10.200.20.35:22-10.200.16.10:51200.service - OpenSSH per-connection server daemon (10.200.16.10:51200). Aug 5 21:46:10.173551 sshd[5810]: Accepted publickey for core from 10.200.16.10 port 51200 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:10.174853 sshd[5810]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:10.178646 systemd-logind[1666]: New session 11 of user core. Aug 5 21:46:10.189336 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 21:46:10.587238 sshd[5810]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:10.591018 systemd-logind[1666]: Session 11 logged out. Waiting for processes to exit. 
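Note on the containerd entries above: each stale sandbox goes through the same Calico CNI teardown pattern: StopPodSandbox, a k8s.go 572 warning that CNI_CONTAINERID no longer matches the WorkloadEndpoint ContainerID (so the WEP is kept), netns cleanup, an attempted IP release under the host-wide IPAM lock (the address is already gone, so it is ignored), and finally RemovePodSandbox. As a reading aid only, the sketch below pulls the sandbox IDs and their stop/remove outcomes out of a saved excerpt of this journal; the file name and the regular expressions are assumptions derived from the lines above, not part of any containerd or Calico interface.

```python
import re
from collections import OrderedDict

# Minimal sketch, assuming the journal excerpt above was saved to a plain text file.
# Inner quotes appear escaped (\") in the dump, so the patterns match them literally.
STOP_OK = re.compile(r'StopPodSandbox for \\"([0-9a-f]{64})\\" returns successfully')
REMOVE_OK = re.compile(r'RemovePodSandbox \\"([0-9a-f]{64})\\" returns successfully')

def summarize(path="containerd-teardown.log"):
    text = open(path, encoding="utf-8").read()
    stopped = set(STOP_OK.findall(text))
    removed = OrderedDict((sid, None) for sid in REMOVE_OK.findall(text))
    for sid in removed:
        print(f"sandbox {sid[:12]}  stopped={sid in stopped}  removed=True")

if __name__ == "__main__":
    summarize()
```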
Aug 5 21:46:10.591096 systemd[1]: sshd@8-10.200.20.35:22-10.200.16.10:51200.service: Deactivated successfully. Aug 5 21:46:10.593405 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 21:46:10.595244 systemd-logind[1666]: Removed session 11. Aug 5 21:46:15.666239 systemd[1]: Started sshd@9-10.200.20.35:22-10.200.16.10:51202.service - OpenSSH per-connection server daemon (10.200.16.10:51202). Aug 5 21:46:16.097966 sshd[5850]: Accepted publickey for core from 10.200.16.10 port 51202 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:16.099259 sshd[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:16.106500 systemd-logind[1666]: New session 12 of user core. Aug 5 21:46:16.116354 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 21:46:16.497964 sshd[5850]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:16.500488 systemd-logind[1666]: Session 12 logged out. Waiting for processes to exit. Aug 5 21:46:16.501003 systemd[1]: sshd@9-10.200.20.35:22-10.200.16.10:51202.service: Deactivated successfully. Aug 5 21:46:16.503925 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 21:46:16.506428 systemd-logind[1666]: Removed session 12. Aug 5 21:46:21.576850 systemd[1]: Started sshd@10-10.200.20.35:22-10.200.16.10:34308.service - OpenSSH per-connection server daemon (10.200.16.10:34308). Aug 5 21:46:22.003975 sshd[5897]: Accepted publickey for core from 10.200.16.10 port 34308 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:22.005543 sshd[5897]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:22.012054 systemd-logind[1666]: New session 13 of user core. Aug 5 21:46:22.015326 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 21:46:22.405896 sshd[5897]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:22.409381 systemd[1]: sshd@10-10.200.20.35:22-10.200.16.10:34308.service: Deactivated successfully. Aug 5 21:46:22.411055 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 21:46:22.411800 systemd-logind[1666]: Session 13 logged out. Waiting for processes to exit. Aug 5 21:46:22.412844 systemd-logind[1666]: Removed session 13. Aug 5 21:46:22.491093 systemd[1]: Started sshd@11-10.200.20.35:22-10.200.16.10:34320.service - OpenSSH per-connection server daemon (10.200.16.10:34320). Aug 5 21:46:22.951047 sshd[5915]: Accepted publickey for core from 10.200.16.10 port 34320 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:22.952415 sshd[5915]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:22.956346 systemd-logind[1666]: New session 14 of user core. Aug 5 21:46:22.964330 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 21:46:23.375303 sshd[5915]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:23.378865 systemd[1]: sshd@11-10.200.20.35:22-10.200.16.10:34320.service: Deactivated successfully. Aug 5 21:46:23.381405 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 21:46:23.382596 systemd-logind[1666]: Session 14 logged out. Waiting for processes to exit. Aug 5 21:46:23.383621 systemd-logind[1666]: Removed session 14. Aug 5 21:46:23.457857 systemd[1]: Started sshd@12-10.200.20.35:22-10.200.16.10:34330.service - OpenSSH per-connection server daemon (10.200.16.10:34330). 
Aug 5 21:46:23.896863 sshd[5926]: Accepted publickey for core from 10.200.16.10 port 34330 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:23.898257 sshd[5926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:23.902007 systemd-logind[1666]: New session 15 of user core. Aug 5 21:46:23.908336 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 21:46:24.297779 sshd[5926]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:24.301058 systemd[1]: sshd@12-10.200.20.35:22-10.200.16.10:34330.service: Deactivated successfully. Aug 5 21:46:24.303143 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 21:46:24.303943 systemd-logind[1666]: Session 15 logged out. Waiting for processes to exit. Aug 5 21:46:24.304839 systemd-logind[1666]: Removed session 15. Aug 5 21:46:29.384460 systemd[1]: Started sshd@13-10.200.20.35:22-10.200.16.10:48672.service - OpenSSH per-connection server daemon (10.200.16.10:48672). Aug 5 21:46:29.844320 sshd[5945]: Accepted publickey for core from 10.200.16.10 port 48672 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:29.845694 sshd[5945]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:29.849718 systemd-logind[1666]: New session 16 of user core. Aug 5 21:46:29.855308 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 21:46:30.240098 sshd[5945]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:30.242891 systemd-logind[1666]: Session 16 logged out. Waiting for processes to exit. Aug 5 21:46:30.243102 systemd[1]: sshd@13-10.200.20.35:22-10.200.16.10:48672.service: Deactivated successfully. Aug 5 21:46:30.245316 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 21:46:30.247469 systemd-logind[1666]: Removed session 16. Aug 5 21:46:35.323788 systemd[1]: Started sshd@14-10.200.20.35:22-10.200.16.10:48684.service - OpenSSH per-connection server daemon (10.200.16.10:48684). Aug 5 21:46:35.789303 sshd[5987]: Accepted publickey for core from 10.200.16.10 port 48684 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:35.790650 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:35.794582 systemd-logind[1666]: New session 17 of user core. Aug 5 21:46:35.802343 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 21:46:36.185283 sshd[5987]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:36.188716 systemd[1]: sshd@14-10.200.20.35:22-10.200.16.10:48684.service: Deactivated successfully. Aug 5 21:46:36.190746 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 21:46:36.191565 systemd-logind[1666]: Session 17 logged out. Waiting for processes to exit. Aug 5 21:46:36.192411 systemd-logind[1666]: Removed session 17. Aug 5 21:46:41.265105 systemd[1]: Started sshd@15-10.200.20.35:22-10.200.16.10:40312.service - OpenSSH per-connection server daemon (10.200.16.10:40312). Aug 5 21:46:41.696382 sshd[6001]: Accepted publickey for core from 10.200.16.10 port 40312 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:41.697666 sshd[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:41.702314 systemd-logind[1666]: New session 18 of user core. Aug 5 21:46:41.709331 systemd[1]: Started session-18.scope - Session 18 of User core. 
Aug 5 21:46:42.099512 sshd[6001]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:42.103038 systemd[1]: sshd@15-10.200.20.35:22-10.200.16.10:40312.service: Deactivated successfully. Aug 5 21:46:42.104767 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 21:46:42.105471 systemd-logind[1666]: Session 18 logged out. Waiting for processes to exit. Aug 5 21:46:42.106348 systemd-logind[1666]: Removed session 18. Aug 5 21:46:42.177648 systemd[1]: Started sshd@16-10.200.20.35:22-10.200.16.10:40322.service - OpenSSH per-connection server daemon (10.200.16.10:40322). Aug 5 21:46:42.603577 sshd[6014]: Accepted publickey for core from 10.200.16.10 port 40322 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:42.604912 sshd[6014]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:42.608774 systemd-logind[1666]: New session 19 of user core. Aug 5 21:46:42.615298 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 5 21:46:43.121942 sshd[6014]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:43.125059 systemd-logind[1666]: Session 19 logged out. Waiting for processes to exit. Aug 5 21:46:43.125359 systemd[1]: sshd@16-10.200.20.35:22-10.200.16.10:40322.service: Deactivated successfully. Aug 5 21:46:43.127049 systemd[1]: session-19.scope: Deactivated successfully. Aug 5 21:46:43.131482 systemd-logind[1666]: Removed session 19. Aug 5 21:46:43.208849 systemd[1]: Started sshd@17-10.200.20.35:22-10.200.16.10:40326.service - OpenSSH per-connection server daemon (10.200.16.10:40326). Aug 5 21:46:43.670346 sshd[6024]: Accepted publickey for core from 10.200.16.10 port 40326 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:43.671712 sshd[6024]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:43.675958 systemd-logind[1666]: New session 20 of user core. Aug 5 21:46:43.680305 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 5 21:46:45.561146 sshd[6024]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:45.564859 systemd[1]: sshd@17-10.200.20.35:22-10.200.16.10:40326.service: Deactivated successfully. Aug 5 21:46:45.566773 systemd[1]: session-20.scope: Deactivated successfully. Aug 5 21:46:45.567630 systemd-logind[1666]: Session 20 logged out. Waiting for processes to exit. Aug 5 21:46:45.569153 systemd-logind[1666]: Removed session 20. Aug 5 21:46:45.649529 systemd[1]: Started sshd@18-10.200.20.35:22-10.200.16.10:40334.service - OpenSSH per-connection server daemon (10.200.16.10:40334). Aug 5 21:46:46.075398 sshd[6048]: Accepted publickey for core from 10.200.16.10 port 40334 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:46.076149 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:46.079780 systemd-logind[1666]: New session 21 of user core. Aug 5 21:46:46.086525 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 5 21:46:46.573264 sshd[6048]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:46.576679 systemd[1]: sshd@18-10.200.20.35:22-10.200.16.10:40334.service: Deactivated successfully. Aug 5 21:46:46.579106 systemd[1]: session-21.scope: Deactivated successfully. Aug 5 21:46:46.580467 systemd-logind[1666]: Session 21 logged out. Waiting for processes to exit. Aug 5 21:46:46.581999 systemd-logind[1666]: Removed session 21. 
Aug 5 21:46:46.667751 systemd[1]: Started sshd@19-10.200.20.35:22-10.200.16.10:40340.service - OpenSSH per-connection server daemon (10.200.16.10:40340). Aug 5 21:46:47.124121 sshd[6059]: Accepted publickey for core from 10.200.16.10 port 40340 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:47.125476 sshd[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:47.129929 systemd-logind[1666]: New session 22 of user core. Aug 5 21:46:47.136304 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 5 21:46:47.524387 sshd[6059]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:47.527275 systemd-logind[1666]: Session 22 logged out. Waiting for processes to exit. Aug 5 21:46:47.527420 systemd[1]: sshd@19-10.200.20.35:22-10.200.16.10:40340.service: Deactivated successfully. Aug 5 21:46:47.529020 systemd[1]: session-22.scope: Deactivated successfully. Aug 5 21:46:47.530961 systemd-logind[1666]: Removed session 22. Aug 5 21:46:52.618437 systemd[1]: Started sshd@20-10.200.20.35:22-10.200.16.10:36024.service - OpenSSH per-connection server daemon (10.200.16.10:36024). Aug 5 21:46:53.076980 sshd[6093]: Accepted publickey for core from 10.200.16.10 port 36024 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:53.078398 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:53.082767 systemd-logind[1666]: New session 23 of user core. Aug 5 21:46:53.090347 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 5 21:46:53.479419 sshd[6093]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:53.483488 systemd-logind[1666]: Session 23 logged out. Waiting for processes to exit. Aug 5 21:46:53.483824 systemd[1]: sshd@20-10.200.20.35:22-10.200.16.10:36024.service: Deactivated successfully. Aug 5 21:46:53.485839 systemd[1]: session-23.scope: Deactivated successfully. Aug 5 21:46:53.487002 systemd-logind[1666]: Removed session 23. Aug 5 21:46:58.566417 systemd[1]: Started sshd@21-10.200.20.35:22-10.200.16.10:36028.service - OpenSSH per-connection server daemon (10.200.16.10:36028). Aug 5 21:46:58.988760 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 36028 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:46:58.990048 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:46:58.994330 systemd-logind[1666]: New session 24 of user core. Aug 5 21:46:58.998382 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 5 21:46:59.384804 sshd[6114]: pam_unix(sshd:session): session closed for user core Aug 5 21:46:59.388316 systemd-logind[1666]: Session 24 logged out. Waiting for processes to exit. Aug 5 21:46:59.388592 systemd[1]: sshd@21-10.200.20.35:22-10.200.16.10:36028.service: Deactivated successfully. Aug 5 21:46:59.390778 systemd[1]: session-24.scope: Deactivated successfully. Aug 5 21:46:59.392715 systemd-logind[1666]: Removed session 24. Aug 5 21:47:04.469836 systemd[1]: Started sshd@22-10.200.20.35:22-10.200.16.10:33036.service - OpenSSH per-connection server daemon (10.200.16.10:33036). Aug 5 21:47:04.935809 sshd[6149]: Accepted publickey for core from 10.200.16.10 port 33036 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:47:04.937098 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:47:04.941641 systemd-logind[1666]: New session 25 of user core. 
Aug 5 21:47:04.945305 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 5 21:47:05.337441 sshd[6149]: pam_unix(sshd:session): session closed for user core Aug 5 21:47:05.340449 systemd[1]: sshd@22-10.200.20.35:22-10.200.16.10:33036.service: Deactivated successfully. Aug 5 21:47:05.342805 systemd[1]: session-25.scope: Deactivated successfully. Aug 5 21:47:05.344214 systemd-logind[1666]: Session 25 logged out. Waiting for processes to exit. Aug 5 21:47:05.345635 systemd-logind[1666]: Removed session 25. Aug 5 21:47:10.424769 systemd[1]: Started sshd@23-10.200.20.35:22-10.200.16.10:41186.service - OpenSSH per-connection server daemon (10.200.16.10:41186). Aug 5 21:47:10.851207 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 41186 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:47:10.852430 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:47:10.856854 systemd-logind[1666]: New session 26 of user core. Aug 5 21:47:10.862299 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 5 21:47:11.250553 sshd[6168]: pam_unix(sshd:session): session closed for user core Aug 5 21:47:11.253653 systemd[1]: sshd@23-10.200.20.35:22-10.200.16.10:41186.service: Deactivated successfully. Aug 5 21:47:11.256794 systemd[1]: session-26.scope: Deactivated successfully. Aug 5 21:47:11.257856 systemd-logind[1666]: Session 26 logged out. Waiting for processes to exit. Aug 5 21:47:11.259022 systemd-logind[1666]: Removed session 26. Aug 5 21:47:16.334387 systemd[1]: Started sshd@24-10.200.20.35:22-10.200.16.10:41196.service - OpenSSH per-connection server daemon (10.200.16.10:41196). Aug 5 21:47:16.759570 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 41196 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:47:16.760971 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:47:16.765588 systemd-logind[1666]: New session 27 of user core. Aug 5 21:47:16.773300 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 5 21:47:17.153447 sshd[6208]: pam_unix(sshd:session): session closed for user core Aug 5 21:47:17.156964 systemd[1]: sshd@24-10.200.20.35:22-10.200.16.10:41196.service: Deactivated successfully. Aug 5 21:47:17.159583 systemd[1]: session-27.scope: Deactivated successfully. Aug 5 21:47:17.160603 systemd-logind[1666]: Session 27 logged out. Waiting for processes to exit. Aug 5 21:47:17.161759 systemd-logind[1666]: Removed session 27. Aug 5 21:47:22.238322 systemd[1]: Started sshd@25-10.200.20.35:22-10.200.16.10:50186.service - OpenSSH per-connection server daemon (10.200.16.10:50186). Aug 5 21:47:22.698502 sshd[6248]: Accepted publickey for core from 10.200.16.10 port 50186 ssh2: RSA SHA256:2YfCcJx2I76XU6FoJZeks0f26dkMePQ1H4MTp8bVOeI Aug 5 21:47:22.700117 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 21:47:22.703707 systemd-logind[1666]: New session 28 of user core. Aug 5 21:47:22.711373 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 5 21:47:23.104433 sshd[6248]: pam_unix(sshd:session): session closed for user core Aug 5 21:47:23.107398 systemd[1]: sshd@25-10.200.20.35:22-10.200.16.10:50186.service: Deactivated successfully. Aug 5 21:47:23.109338 systemd[1]: session-28.scope: Deactivated successfully. Aug 5 21:47:23.110804 systemd-logind[1666]: Session 28 logged out. Waiting for processes to exit. 
Aug 5 21:47:23.111626 systemd-logind[1666]: Removed session 28.
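Note on the sshd entries above: the remainder of the dump is a steady cadence of per-connection sshd units (sshd@N-10.200.20.35:22-10.200.16.10:PORT.service), each paired with a systemd-logind session for user core that is opened, used briefly, and cleanly deactivated. In the same spirit as the earlier sketch, the following pairs the "New session N" and "Removed session N" logind entries to report rough session lifetimes; the file name, the hard-coded year, and the patterns are assumptions drawn only from this dump.

```python
import re
from datetime import datetime

# Minimal sketch, assuming the sshd portion of the journal above was saved to a file.
TS = r'(\w{3} +\d+ \d{2}:\d{2}:\d{2})\.\d+'
NEW = re.compile(TS + r' systemd-logind\[\d+\]: New session (\d+) of user (\w+)')
REMOVED = re.compile(TS + r' systemd-logind\[\d+\]: Removed session (\d+)')

def parse(ts, year=2024):
    # The journal timestamps carry no year; 2024 is taken from the entries themselves.
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S")

def session_report(path="sshd-sessions.log"):
    text = open(path, encoding="utf-8").read()
    opened = {sid: (parse(ts), user) for ts, sid, user in NEW.findall(text)}
    for ts, sid in REMOVED.findall(text):
        if sid in opened:
            start, user = opened[sid]
            print(f"session {sid} ({user}): {(parse(ts) - start).total_seconds():.0f}s")

if __name__ == "__main__":
    session_report()
```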