Feb 9 18:31:58.058819 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Feb 9 18:31:58.058838 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024
Feb 9 18:31:58.058846 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Feb 9 18:31:58.058853 kernel: printk: bootconsole [pl11] enabled
Feb 9 18:31:58.058858 kernel: efi: EFI v2.70 by EDK II
Feb 9 18:31:58.058864 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef3c198 RNG=0x3fd89998 MEMRESERVE=0x37e73f98
Feb 9 18:31:58.058870 kernel: random: crng init done
Feb 9 18:31:58.058876 kernel: ACPI: Early table checksum verification disabled
Feb 9 18:31:58.058881 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL)
Feb 9 18:31:58.058887 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058893 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058899 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 18:31:58.058905 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058910 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058917 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058923 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058929 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058936 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058941 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Feb 9 18:31:58.058947 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 18:31:58.058953 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Feb 9 18:31:58.058959 kernel: NUMA: Failed to initialise from firmware
Feb 9 18:31:58.058964 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:31:58.058970 kernel: NUMA: NODE_DATA [mem 0x1bf7f0900-0x1bf7f5fff]
Feb 9 18:31:58.058976 kernel: Zone ranges:
Feb 9 18:31:58.058982 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Feb 9 18:31:58.058987 kernel: DMA32 empty
Feb 9 18:31:58.058994 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:31:58.059000 kernel: Movable zone start for each node
Feb 9 18:31:58.059005 kernel: Early memory node ranges
Feb 9 18:31:58.059011 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Feb 9 18:31:58.059017 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff]
Feb 9 18:31:58.059022 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff]
Feb 9 18:31:58.059028 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff]
Feb 9 18:31:58.059034 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff]
Feb 9 18:31:58.059039 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff]
Feb 9 18:31:58.059045 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff]
Feb 9 18:31:58.059051 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff]
Feb 9 18:31:58.059056 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Feb 9 18:31:58.059063 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Feb 9 18:31:58.059072 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Feb 9 18:31:58.059078 kernel: psci: probing for conduit method from ACPI.
Feb 9 18:31:58.059084 kernel: psci: PSCIv1.1 detected in firmware.
Feb 9 18:31:58.059090 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 18:31:58.059097 kernel: psci: MIGRATE_INFO_TYPE not supported.
Feb 9 18:31:58.059103 kernel: psci: SMC Calling Convention v1.4
Feb 9 18:31:58.059109 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1
Feb 9 18:31:58.059115 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1
Feb 9 18:31:58.059121 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 18:31:58.059127 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 18:31:58.059133 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 18:31:58.059139 kernel: Detected PIPT I-cache on CPU0
Feb 9 18:31:58.059145 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 18:31:58.059151 kernel: CPU features: detected: Hardware dirty bit management
Feb 9 18:31:58.059158 kernel: CPU features: detected: Spectre-BHB
Feb 9 18:31:58.059164 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 18:31:58.059171 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 18:31:58.059177 kernel: CPU features: detected: ARM erratum 1418040
Feb 9 18:31:58.059183 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Feb 9 18:31:58.059189 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Feb 9 18:31:58.059195 kernel: Policy zone: Normal
Feb 9 18:31:58.059203 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:31:58.059210 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 18:31:58.059216 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 18:31:58.059222 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 18:31:58.059228 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 18:31:58.059235 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB)
Feb 9 18:31:58.059242 kernel: Memory: 3991928K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202232K reserved, 0K cma-reserved)
Feb 9 18:31:58.059248 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 18:31:58.059254 kernel: trace event string verifier disabled
Feb 9 18:31:58.059260 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 18:31:58.059267 kernel: rcu: RCU event tracing is enabled.
Feb 9 18:31:58.059273 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 18:31:58.059279 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 18:31:58.059285 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 18:31:58.059291 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 18:31:58.059298 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 18:31:58.059305 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 18:31:58.059311 kernel: GICv3: 960 SPIs implemented
Feb 9 18:31:58.059317 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 18:31:58.059323 kernel: GICv3: Distributor has no Range Selector support
Feb 9 18:31:58.059329 kernel: Root IRQ handler: gic_handle_irq
Feb 9 18:31:58.059335 kernel: GICv3: 16 PPIs implemented
Feb 9 18:31:58.059341 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Feb 9 18:31:58.059347 kernel: ITS: No ITS available, not enabling LPIs
Feb 9 18:31:58.059353 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:31:58.059359 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Feb 9 18:31:58.059366 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Feb 9 18:31:58.059372 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Feb 9 18:31:58.059380 kernel: Console: colour dummy device 80x25
Feb 9 18:31:58.059386 kernel: printk: console [tty1] enabled
Feb 9 18:31:58.059392 kernel: ACPI: Core revision 20210730
Feb 9 18:31:58.059399 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Feb 9 18:31:58.059405 kernel: pid_max: default: 32768 minimum: 301
Feb 9 18:31:58.059411 kernel: LSM: Security Framework initializing
Feb 9 18:31:58.059417 kernel: SELinux: Initializing.
Feb 9 18:31:58.059424 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:31:58.059430 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 18:31:58.059438 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Feb 9 18:31:58.059444 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0
Feb 9 18:31:58.059450 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 18:31:58.059457 kernel: Remapping and enabling EFI services.
Feb 9 18:31:58.059463 kernel: smp: Bringing up secondary CPUs ...
Feb 9 18:31:58.059469 kernel: Detected PIPT I-cache on CPU1
Feb 9 18:31:58.059476 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Feb 9 18:31:58.059482 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Feb 9 18:31:58.059488 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Feb 9 18:31:58.059495 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 18:31:58.059502 kernel: SMP: Total of 2 processors activated.
Feb 9 18:31:58.059508 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 18:31:58.059514 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Feb 9 18:31:58.059521 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Feb 9 18:31:58.059527 kernel: CPU features: detected: CRC32 instructions
Feb 9 18:31:58.059533 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Feb 9 18:31:58.059540 kernel: CPU features: detected: LSE atomic instructions
Feb 9 18:31:58.059546 kernel: CPU features: detected: Privileged Access Never
Feb 9 18:31:58.059553 kernel: CPU: All CPU(s) started at EL1
Feb 9 18:31:58.059559 kernel: alternatives: patching kernel code
Feb 9 18:31:58.059570 kernel: devtmpfs: initialized
Feb 9 18:31:58.059578 kernel: KASLR enabled
Feb 9 18:31:58.059584 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 18:31:58.059591 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 18:31:58.059598 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 18:31:58.059604 kernel: SMBIOS 3.1.0 present.
Feb 9 18:31:58.059611 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 18:31:58.059618 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 18:31:58.059625 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 18:31:58.059632 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 18:31:58.059639 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 18:31:58.059645 kernel: audit: initializing netlink subsys (disabled)
Feb 9 18:31:58.059666 kernel: audit: type=2000 audit(0.112:1): state=initialized audit_enabled=0 res=1
Feb 9 18:31:58.059673 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 18:31:58.059680 kernel: cpuidle: using governor menu
Feb 9 18:31:58.059688 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 18:31:58.059695 kernel: ASID allocator initialised with 32768 entries
Feb 9 18:31:58.059701 kernel: ACPI: bus type PCI registered
Feb 9 18:31:58.059708 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 18:31:58.059714 kernel: Serial: AMBA PL011 UART driver
Feb 9 18:31:58.059721 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 18:31:58.059727 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 18:31:58.059734 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 18:31:58.059741 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 18:31:58.059748 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 18:31:58.059755 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 18:31:58.059761 kernel: ACPI: Added _OSI(Module Device)
Feb 9 18:31:58.059768 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 18:31:58.059774 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 18:31:58.059781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 18:31:58.059788 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 18:31:58.059794 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 18:31:58.059801 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 18:31:58.059809 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 18:31:58.059815 kernel: ACPI: Interpreter enabled
Feb 9 18:31:58.059822 kernel: ACPI: Using GIC for interrupt routing
Feb 9 18:31:58.059829 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Feb 9 18:31:58.059835 kernel: printk: console [ttyAMA0] enabled
Feb 9 18:31:58.059842 kernel: printk: bootconsole [pl11] disabled
Feb 9 18:31:58.059849 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Feb 9 18:31:58.059855 kernel: iommu: Default domain type: Translated
Feb 9 18:31:58.059862 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 18:31:58.059869 kernel: vgaarb: loaded
Feb 9 18:31:58.059876 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 18:31:58.059883 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 18:31:58.059889 kernel: PTP clock support registered
Feb 9 18:31:58.059896 kernel: Registered efivars operations
Feb 9 18:31:58.059902 kernel: No ACPI PMU IRQ for CPU0
Feb 9 18:31:58.059909 kernel: No ACPI PMU IRQ for CPU1
Feb 9 18:31:58.059915 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 18:31:58.059922 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 18:31:58.059930 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 18:31:58.059936 kernel: pnp: PnP ACPI init
Feb 9 18:31:58.059943 kernel: pnp: PnP ACPI: found 0 devices
Feb 9 18:31:58.059949 kernel: NET: Registered PF_INET protocol family
Feb 9 18:31:58.059956 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 18:31:58.059962 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 18:31:58.059969 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 18:31:58.059976 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 18:31:58.059983 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 18:31:58.059990 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 18:31:58.059997 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:31:58.060004 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 18:31:58.060010 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 18:31:58.060017 kernel: PCI: CLS 0 bytes, default 64
Feb 9 18:31:58.060024 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Feb 9 18:31:58.060031 kernel: kvm [1]: HYP mode not available
Feb 9 18:31:58.060037 kernel: Initialise system trusted keyrings
Feb 9 18:31:58.060044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 18:31:58.060051 kernel: Key type asymmetric registered
Feb 9 18:31:58.060058 kernel: Asymmetric key parser 'x509' registered
Feb 9 18:31:58.060064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 18:31:58.060071 kernel: io scheduler mq-deadline registered
Feb 9 18:31:58.060078 kernel: io scheduler kyber registered
Feb 9 18:31:58.060084 kernel: io scheduler bfq registered
Feb 9 18:31:58.060091 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 18:31:58.060097 kernel: thunder_xcv, ver 1.0
Feb 9 18:31:58.060104 kernel: thunder_bgx, ver 1.0
Feb 9 18:31:58.060111 kernel: nicpf, ver 1.0
Feb 9 18:31:58.060118 kernel: nicvf, ver 1.0
Feb 9 18:31:58.060243 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 18:31:58.060311 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:31:57 UTC (1707503517)
Feb 9 18:31:58.060320 kernel: efifb: probing for efifb
Feb 9 18:31:58.060327 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 18:31:58.060334 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 18:31:58.060341 kernel: efifb: scrolling: redraw
Feb 9 18:31:58.060350 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 18:31:58.060357 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:31:58.060364 kernel: fb0: EFI VGA frame buffer device
Feb 9 18:31:58.060371 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Feb 9 18:31:58.060378 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 18:31:58.060385 kernel: NET: Registered PF_INET6 protocol family
Feb 9 18:31:58.060392 kernel: Segment Routing with IPv6
Feb 9 18:31:58.060398 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 18:31:58.060405 kernel: NET: Registered PF_PACKET protocol family
Feb 9 18:31:58.060413 kernel: Key type dns_resolver registered
Feb 9 18:31:58.060420 kernel: registered taskstats version 1
Feb 9 18:31:58.060427 kernel: Loading compiled-in X.509 certificates
Feb 9 18:31:58.060434 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9'
Feb 9 18:31:58.060441 kernel: Key type .fscrypt registered
Feb 9 18:31:58.060448 kernel: Key type fscrypt-provisioning registered
Feb 9 18:31:58.060455 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 18:31:58.060462 kernel: ima: Allocated hash algorithm: sha1
Feb 9 18:31:58.060489 kernel: ima: No architecture policies found
Feb 9 18:31:58.060498 kernel: Freeing unused kernel memory: 34688K
Feb 9 18:31:58.060505 kernel: Run /init as init process
Feb 9 18:31:58.060512 kernel: with arguments:
Feb 9 18:31:58.060519 kernel: /init
Feb 9 18:31:58.060526 kernel: with environment:
Feb 9 18:31:58.060533 kernel: HOME=/
Feb 9 18:31:58.060540 kernel: TERM=linux
Feb 9 18:31:58.060546 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 18:31:58.060555 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 18:31:58.060566 systemd[1]: Detected virtualization microsoft.
Feb 9 18:31:58.060574 systemd[1]: Detected architecture arm64.
Feb 9 18:31:58.060581 systemd[1]: Running in initrd.
Feb 9 18:31:58.060588 systemd[1]: No hostname configured, using default hostname.
Feb 9 18:31:58.060595 systemd[1]: Hostname set to .
Feb 9 18:31:58.060603 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:31:58.060610 systemd[1]: Queued start job for default target initrd.target.
Feb 9 18:31:58.060619 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:31:58.060626 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:31:58.060633 systemd[1]: Reached target paths.target.
Feb 9 18:31:58.060641 systemd[1]: Reached target slices.target.
Feb 9 18:31:58.060648 systemd[1]: Reached target swap.target.
Feb 9 18:31:58.060668 systemd[1]: Reached target timers.target.
Feb 9 18:31:58.060676 systemd[1]: Listening on iscsid.socket.
Feb 9 18:31:58.060684 systemd[1]: Listening on iscsiuio.socket.
Feb 9 18:31:58.060692 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 18:31:58.060702 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 18:31:58.060709 systemd[1]: Listening on systemd-journald.socket.
Feb 9 18:31:58.060717 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:31:58.060724 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:31:58.060732 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:31:58.060739 systemd[1]: Reached target sockets.target.
Feb 9 18:31:58.060747 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:31:58.060754 systemd[1]: Finished network-cleanup.service.
Feb 9 18:31:58.060763 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 18:31:58.060770 systemd[1]: Starting systemd-journald.service...
Feb 9 18:31:58.060778 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:31:58.060785 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:31:58.060793 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 18:31:58.060804 systemd-journald[276]: Journal started
Feb 9 18:31:58.060845 systemd-journald[276]: Runtime Journal (/run/log/journal/62588f23c9cb4d5ab90ba040b36281df) is 8.0M, max 78.6M, 70.6M free.
Feb 9 18:31:58.051691 systemd-modules-load[277]: Inserted module 'overlay'
Feb 9 18:31:58.084294 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 18:31:58.089727 systemd-resolved[278]: Positive Trust Anchors:
Feb 9 18:31:58.089735 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:31:58.089762 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:31:58.190648 systemd[1]: Started systemd-journald.service.
Feb 9 18:31:58.190679 kernel: Bridge firewalling registered
Feb 9 18:31:58.190690 kernel: audit: type=1130 audit(1707503518.160:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.190699 kernel: SCSI subsystem initialized
Feb 9 18:31:58.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.091849 systemd-resolved[278]: Defaulting to hostname 'linux'.
Feb 9 18:31:58.266306 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 18:31:58.266330 kernel: audit: type=1130 audit(1707503518.193:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.266341 kernel: device-mapper: uevent: version 1.0.3
Feb 9 18:31:58.266357 kernel: audit: type=1130 audit(1707503518.216:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.266366 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 18:31:58.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.156457 systemd-modules-load[277]: Inserted module 'br_netfilter'
Feb 9 18:31:58.320705 kernel: audit: type=1130 audit(1707503518.271:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.320734 kernel: audit: type=1130 audit(1707503518.299:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.185113 systemd[1]: Started systemd-resolved.service.
Feb 9 18:31:58.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.194374 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:31:58.357402 kernel: audit: type=1130 audit(1707503518.327:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.217024 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 18:31:58.271878 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 18:31:58.278385 systemd-modules-load[277]: Inserted module 'dm_multipath'
Feb 9 18:31:58.299974 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:31:58.352000 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:31:58.368246 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 18:31:58.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.374312 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:31:58.455490 kernel: audit: type=1130 audit(1707503518.422:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.391746 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:31:58.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.404816 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:31:58.491276 kernel: audit: type=1130 audit(1707503518.449:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.424111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:31:58.521044 kernel: audit: type=1130 audit(1707503518.485:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.455677 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 18:31:58.487344 systemd[1]: Starting dracut-cmdline.service...
Feb 9 18:31:58.535874 dracut-cmdline[298]: dracut-dracut-053
Feb 9 18:31:58.541362 dracut-cmdline[298]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4
Feb 9 18:31:58.631676 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 18:31:58.645677 kernel: iscsi: registered transport (tcp)
Feb 9 18:31:58.665598 kernel: iscsi: registered transport (qla4xxx)
Feb 9 18:31:58.665645 kernel: QLogic iSCSI HBA Driver
Feb 9 18:31:58.695119 systemd[1]: Finished dracut-cmdline.service.
Feb 9 18:31:58.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:58.700839 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 18:31:58.758676 kernel: raid6: neonx8 gen() 13812 MB/s
Feb 9 18:31:58.776669 kernel: raid6: neonx8 xor() 10827 MB/s
Feb 9 18:31:58.797674 kernel: raid6: neonx4 gen() 13472 MB/s
Feb 9 18:31:58.819688 kernel: raid6: neonx4 xor() 11297 MB/s
Feb 9 18:31:58.840690 kernel: raid6: neonx2 gen() 12974 MB/s
Feb 9 18:31:58.861673 kernel: raid6: neonx2 xor() 10239 MB/s
Feb 9 18:31:58.883678 kernel: raid6: neonx1 gen() 10489 MB/s
Feb 9 18:31:58.904689 kernel: raid6: neonx1 xor() 8757 MB/s
Feb 9 18:31:58.925674 kernel: raid6: int64x8 gen() 6292 MB/s
Feb 9 18:31:58.948664 kernel: raid6: int64x8 xor() 3548 MB/s
Feb 9 18:31:58.969663 kernel: raid6: int64x4 gen() 7239 MB/s
Feb 9 18:31:58.990661 kernel: raid6: int64x4 xor() 3851 MB/s
Feb 9 18:31:59.012661 kernel: raid6: int64x2 gen() 6153 MB/s
Feb 9 18:31:59.033660 kernel: raid6: int64x2 xor() 3322 MB/s
Feb 9 18:31:59.055661 kernel: raid6: int64x1 gen() 5018 MB/s
Feb 9 18:31:59.083138 kernel: raid6: int64x1 xor() 2645 MB/s
Feb 9 18:31:59.083149 kernel: raid6: using algorithm neonx8 gen() 13812 MB/s
Feb 9 18:31:59.083157 kernel: raid6: .... xor() 10827 MB/s, rmw enabled
Feb 9 18:31:59.088235 kernel: raid6: using neon recovery algorithm
Feb 9 18:31:59.113663 kernel: xor: measuring software checksum speed
Feb 9 18:31:59.113677 kernel: 8regs : 17304 MB/sec
Feb 9 18:31:59.118573 kernel: 32regs : 20755 MB/sec
Feb 9 18:31:59.130040 kernel: arm64_neon : 27882 MB/sec
Feb 9 18:31:59.130051 kernel: xor: using function: arm64_neon (27882 MB/sec)
Feb 9 18:31:59.186667 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 18:31:59.195840 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 18:31:59.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:59.204000 audit: BPF prog-id=7 op=LOAD
Feb 9 18:31:59.204000 audit: BPF prog-id=8 op=LOAD
Feb 9 18:31:59.206118 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:31:59.221885 systemd-udevd[475]: Using default interface naming scheme 'v252'.
Feb 9 18:31:59.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:59.229591 systemd[1]: Started systemd-udevd.service.
Feb 9 18:31:59.236840 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 18:31:59.253870 dracut-pre-trigger[496]: rd.md=0: removing MD RAID activation
Feb 9 18:31:59.283740 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 18:31:59.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:31:59.295065 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:31:59.339411 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:31:59.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:59.394674 kernel: hv_vmbus: Vmbus version:5.3 Feb 9 18:31:59.408817 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 18:31:59.408862 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 18:31:59.430809 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 18:31:59.430856 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 18:31:59.440013 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 18:31:59.449677 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 18:31:59.449724 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 18:31:59.459667 kernel: scsi host1: storvsc_host_t Feb 9 18:31:59.459845 kernel: scsi host0: storvsc_host_t Feb 9 18:31:59.470034 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 18:31:59.477684 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 18:31:59.496467 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 18:31:59.496690 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 18:31:59.503477 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 18:31:59.503726 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 18:31:59.512877 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 18:31:59.513017 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 18:31:59.513099 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 18:31:59.521669 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 18:31:59.521821 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 
18:31:59.532015 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 18:31:59.549677 kernel: hv_netvsc 0022487c-8a6c-0022-487c-8a6c0022487c eth0: VF slot 1 added Feb 9 18:31:59.571119 kernel: hv_vmbus: registering driver hv_pci Feb 9 18:31:59.571170 kernel: hv_pci f92412c7-2f8d-41c8-adc9-ccdd9d35a907: PCI VMBus probing: Using version 0x10004 Feb 9 18:31:59.588481 kernel: hv_pci f92412c7-2f8d-41c8-adc9-ccdd9d35a907: PCI host bridge to bus 2f8d:00 Feb 9 18:31:59.588719 kernel: pci_bus 2f8d:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 9 18:31:59.595838 kernel: pci_bus 2f8d:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 18:31:59.610126 kernel: pci 2f8d:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 9 18:31:59.622890 kernel: pci 2f8d:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 18:31:59.643882 kernel: pci 2f8d:00:02.0: enabling Extended Tags Feb 9 18:31:59.668448 kernel: pci 2f8d:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2f8d:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 9 18:31:59.691246 kernel: pci_bus 2f8d:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 18:31:59.691416 kernel: pci 2f8d:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 18:31:59.733686 kernel: mlx5_core 2f8d:00:02.0: firmware version: 16.30.1284 Feb 9 18:31:59.895679 kernel: mlx5_core 2f8d:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 9 18:31:59.948011 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. 
Feb 9 18:31:59.972665 kernel: hv_netvsc 0022487c-8a6c-0022-487c-8a6c0022487c eth0: VF registering: eth1 Feb 9 18:31:59.972826 kernel: mlx5_core 2f8d:00:02.0 eth1: joined to eth0 Feb 9 18:31:59.985672 kernel: mlx5_core 2f8d:00:02.0 enP12173s1: renamed from eth1 Feb 9 18:31:59.985856 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (532) Feb 9 18:32:00.007759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:32:00.200607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:32:00.211343 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:32:00.222107 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:32:00.241707 systemd[1]: Starting disk-uuid.service... Feb 9 18:32:01.276264 disk-uuid[601]: The operation has completed successfully. Feb 9 18:32:01.281975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 18:32:01.334371 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:32:01.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.334457 systemd[1]: Finished disk-uuid.service. Feb 9 18:32:01.344217 systemd[1]: Starting verity-setup.service... Feb 9 18:32:01.394681 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:32:01.770420 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:32:01.776674 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:32:01.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Feb 9 18:32:01.784467 systemd[1]: Finished verity-setup.service. Feb 9 18:32:01.838668 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:32:01.838749 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:32:01.843503 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:32:01.844280 systemd[1]: Starting ignition-setup.service... Feb 9 18:32:01.852511 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 18:32:01.894169 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:32:01.894230 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:32:01.899409 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:32:01.937434 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:32:01.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.949000 audit: BPF prog-id=9 op=LOAD Feb 9 18:32:01.950743 systemd[1]: Starting systemd-networkd.service... Feb 9 18:32:01.979979 systemd-networkd[868]: lo: Link UP Feb 9 18:32:01.979990 systemd-networkd[868]: lo: Gained carrier Feb 9 18:32:01.980773 systemd-networkd[868]: Enumeration completed Feb 9 18:32:02.021613 kernel: kauditd_printk_skb: 12 callbacks suppressed Feb 9 18:32:02.021642 kernel: audit: type=1130 audit(1707503521.993:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:01.984047 systemd[1]: Started systemd-networkd.service. Feb 9 18:32:01.984674 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:32:02.022902 systemd[1]: Reached target network.target. Feb 9 18:32:02.051349 systemd[1]: Starting iscsiuio.service... Feb 9 18:32:02.061232 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:32:02.061626 systemd[1]: Started iscsiuio.service. Feb 9 18:32:02.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.093420 systemd[1]: Starting iscsid.service... Feb 9 18:32:02.121393 kernel: audit: type=1130 audit(1707503522.074:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.121418 kernel: audit: type=1130 audit(1707503522.101:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.097696 systemd[1]: Started iscsid.service. Feb 9 18:32:02.132259 iscsid[880]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:32:02.132259 iscsid[880]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 18:32:02.132259 iscsid[880]: into or discover targets. 
Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 18:32:02.132259 iscsid[880]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:32:02.132259 iscsid[880]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:32:02.132259 iscsid[880]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:32:02.132259 iscsid[880]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:32:02.260005 kernel: audit: type=1130 audit(1707503522.165:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.260039 kernel: mlx5_core 2f8d:00:02.0 enP12173s1: Link up Feb 9 18:32:02.165000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.121786 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:32:02.288464 kernel: audit: type=1130 audit(1707503522.266:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.161282 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:32:02.166536 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:32:02.195781 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:32:02.321446 kernel: hv_netvsc 0022487c-8a6c-0022-487c-8a6c0022487c eth0: Data path switched to VF: enP12173s1 Feb 9 18:32:02.321589 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:32:02.215797 systemd[1]: Reached target remote-fs.target. Feb 9 18:32:02.226802 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:32:02.262332 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:32:02.320440 systemd-networkd[868]: enP12173s1: Link UP Feb 9 18:32:02.320522 systemd-networkd[868]: eth0: Link UP Feb 9 18:32:02.320643 systemd-networkd[868]: eth0: Gained carrier Feb 9 18:32:02.341024 systemd-networkd[868]: enP12173s1: Gained carrier Feb 9 18:32:02.351866 systemd-networkd[868]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:32:02.377237 systemd[1]: Finished ignition-setup.service. Feb 9 18:32:02.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:02.408112 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:32:02.419177 kernel: audit: type=1130 audit(1707503522.382:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:03.466770 systemd-networkd[868]: eth0: Gained IPv6LL Feb 9 18:32:06.021383 ignition[895]: Ignition 2.14.0 Feb 9 18:32:06.024953 ignition[895]: Stage: fetch-offline Feb 9 18:32:06.025031 ignition[895]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:06.025058 ignition[895]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:06.129786 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:06.129981 ignition[895]: parsed url from cmdline: "" Feb 9 18:32:06.129984 ignition[895]: no config URL provided Feb 9 18:32:06.129990 ignition[895]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:32:06.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.137566 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:32:06.130002 ignition[895]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:32:06.193838 kernel: audit: type=1130 audit(1707503526.147:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.148817 systemd[1]: Starting ignition-fetch.service... 
Feb 9 18:32:06.130008 ignition[895]: failed to fetch config: resource requires networking Feb 9 18:32:06.130237 ignition[895]: Ignition finished successfully Feb 9 18:32:06.178484 ignition[901]: Ignition 2.14.0 Feb 9 18:32:06.178491 ignition[901]: Stage: fetch Feb 9 18:32:06.178599 ignition[901]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:06.178622 ignition[901]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:06.185857 ignition[901]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:06.186149 ignition[901]: parsed url from cmdline: "" Feb 9 18:32:06.186153 ignition[901]: no config URL provided Feb 9 18:32:06.186159 ignition[901]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:32:06.186172 ignition[901]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:32:06.186202 ignition[901]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 18:32:06.277801 ignition[901]: GET result: OK Feb 9 18:32:06.277893 ignition[901]: config has been read from IMDS userdata Feb 9 18:32:06.277962 ignition[901]: parsing config with SHA512: 3c0c8617d3d89343cb74b3a5b9cfb2c783fc4ee7d3cb7456e97db93a94890fe2a8a044073d10d6ab0d58148640a8dc257ebcc400924291c7d760e9a7e106cbf1 Feb 9 18:32:06.311873 unknown[901]: fetched base config from "system" Feb 9 18:32:06.311884 unknown[901]: fetched base config from "system" Feb 9 18:32:06.312550 ignition[901]: fetch: fetch complete Feb 9 18:32:06.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:06.311890 unknown[901]: fetched user config from "azure" Feb 9 18:32:06.355040 kernel: audit: type=1130 audit(1707503526.325:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.312556 ignition[901]: fetch: fetch passed Feb 9 18:32:06.318726 systemd[1]: Finished ignition-fetch.service. Feb 9 18:32:06.312597 ignition[901]: Ignition finished successfully Feb 9 18:32:06.327235 systemd[1]: Starting ignition-kargs.service... Feb 9 18:32:06.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.362644 ignition[907]: Ignition 2.14.0 Feb 9 18:32:06.415269 kernel: audit: type=1130 audit(1707503526.382:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.374582 systemd[1]: Finished ignition-kargs.service. Feb 9 18:32:06.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.362666 ignition[907]: Stage: kargs Feb 9 18:32:06.384178 systemd[1]: Starting ignition-disks.service... Feb 9 18:32:06.462741 kernel: audit: type=1130 audit(1707503526.420:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.362780 ignition[907]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:06.415198 systemd[1]: Finished ignition-disks.service. 
Feb 9 18:32:06.362800 ignition[907]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:06.421218 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:32:06.365763 ignition[907]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:06.450223 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:32:06.372856 ignition[907]: kargs: kargs passed Feb 9 18:32:06.458210 systemd[1]: Reached target local-fs.target. Feb 9 18:32:06.372912 ignition[907]: Ignition finished successfully Feb 9 18:32:06.467619 systemd[1]: Reached target sysinit.target. Feb 9 18:32:06.395203 ignition[913]: Ignition 2.14.0 Feb 9 18:32:06.480241 systemd[1]: Reached target basic.target. Feb 9 18:32:06.395210 ignition[913]: Stage: disks Feb 9 18:32:06.494962 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:32:06.395325 ignition[913]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:06.395352 ignition[913]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:06.399016 ignition[913]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:06.403588 ignition[913]: disks: disks passed Feb 9 18:32:06.403677 ignition[913]: Ignition finished successfully Feb 9 18:32:06.570155 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:32:06.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.590076 systemd-fsck[921]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 18:32:06.587613 systemd[1]: Mounting sysroot.mount... Feb 9 18:32:06.615686 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). 
Quota mode: none. Feb 9 18:32:06.616761 systemd[1]: Mounted sysroot.mount. Feb 9 18:32:06.621184 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:32:06.664442 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:32:06.675193 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 18:32:06.687118 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:32:06.687167 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:32:06.704859 systemd[1]: Mounted sysroot-usr.mount. Feb 9 18:32:06.792872 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:32:06.799296 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:32:06.823672 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (932) Feb 9 18:32:06.831581 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:32:06.843886 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:32:06.843910 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:32:06.848800 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:32:06.852481 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:32:06.863786 initrd-setup-root[963]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:32:06.873052 initrd-setup-root[971]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:32:06.897329 initrd-setup-root[979]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:32:07.409674 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:32:07.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:07.427827 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:32:07.427872 kernel: audit: type=1130 audit(1707503527.414:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:07.424054 systemd[1]: Starting ignition-mount.service... Feb 9 18:32:07.447412 systemd[1]: Starting sysroot-boot.service... Feb 9 18:32:07.462163 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 18:32:07.462467 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 18:32:07.483009 ignition[999]: INFO : Ignition 2.14.0 Feb 9 18:32:07.487586 ignition[999]: INFO : Stage: mount Feb 9 18:32:07.487586 ignition[999]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:07.487586 ignition[999]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:07.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:07.489074 systemd[1]: Finished sysroot-boot.service. Feb 9 18:32:07.562557 kernel: audit: type=1130 audit(1707503527.505:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:07.562580 kernel: audit: type=1130 audit(1707503527.541:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:07.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:07.562623 ignition[999]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:07.562623 ignition[999]: INFO : mount: mount passed Feb 9 18:32:07.562623 ignition[999]: INFO : Ignition finished successfully Feb 9 18:32:07.506523 systemd[1]: Finished ignition-mount.service. Feb 9 18:32:08.025467 coreos-metadata[931]: Feb 09 18:32:08.025 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 18:32:08.036224 coreos-metadata[931]: Feb 09 18:32:08.036 INFO Fetch successful Feb 9 18:32:08.068916 coreos-metadata[931]: Feb 09 18:32:08.068 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 18:32:08.082955 coreos-metadata[931]: Feb 09 18:32:08.082 INFO Fetch successful Feb 9 18:32:08.089820 coreos-metadata[931]: Feb 09 18:32:08.088 INFO wrote hostname ci-3510.3.2-a-37f6c6cc7b to /sysroot/etc/hostname Feb 9 18:32:08.101532 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 18:32:08.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:08.140697 kernel: audit: type=1130 audit(1707503528.107:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:08.133957 systemd[1]: Starting ignition-files.service... Feb 9 18:32:08.148841 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Feb 9 18:32:08.170675 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1010) Feb 9 18:32:08.185648 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:32:08.185672 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:32:08.185682 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:32:08.195703 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:32:08.214051 ignition[1029]: INFO : Ignition 2.14.0 Feb 9 18:32:08.214051 ignition[1029]: INFO : Stage: files Feb 9 18:32:08.227069 ignition[1029]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:08.227069 ignition[1029]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:08.227069 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:08.227069 ignition[1029]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:32:08.227069 ignition[1029]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:32:08.227069 ignition[1029]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:32:08.301592 ignition[1029]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:32:08.311039 ignition[1029]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:32:08.320500 ignition[1029]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:32:08.320500 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:32:08.320500 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:32:08.311194 unknown[1029]: wrote ssh authorized keys file for user: core Feb 9 18:32:08.786237 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 18:32:08.943493 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 18:32:08.962244 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:32:08.962244 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:32:08.962244 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:32:09.126259 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:32:09.464727 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:32:09.476327 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:32:09.476327 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 18:32:09.850778 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 18:32:10.053801 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 18:32:10.072837 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:32:10.072837 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:32:10.072837 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:32:10.234980 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:32:10.546884 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 18:32:10.565922 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:32:10.565922 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:32:10.565922 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:32:10.631648 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:32:10.923299 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 18:32:10.944895 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:32:10.944895 ignition[1029]: 
INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:32:10.944895 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:32:10.987537 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:32:11.656953 ignition[1029]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 18:32:11.675703 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:32:11.675703 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:32:11.675703 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:32:11.675703 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:32:11.675703 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Feb 9 18:32:12.062712 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 18:32:12.462148 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file 
"/sysroot/home/core/install.sh" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 18:32:12.472520 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:32:12.650755 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1031) Feb 9 18:32:12.650777 kernel: audit: type=1130 audit(1707503532.576:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:12.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1707930179" Feb 9 18:32:12.650834 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1707930179": device or resource busy Feb 9 18:32:12.650834 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1707930179", trying btrfs: device or resource busy Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1707930179" Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1707930179" Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem1707930179" Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem1707930179" Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:32:12.650834 ignition[1029]: INFO : 
files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927228488" Feb 9 18:32:12.650834 ignition[1029]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927228488": device or resource busy Feb 9 18:32:12.650834 ignition[1029]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3927228488", trying btrfs: device or resource busy Feb 9 18:32:12.650834 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927228488" Feb 9 18:32:12.961700 kernel: audit: type=1130 audit(1707503532.655:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.961729 kernel: audit: type=1131 audit(1707503532.655:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.961767 kernel: audit: type=1130 audit(1707503532.736:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.961782 kernel: audit: type=1130 audit(1707503532.833:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.961793 kernel: audit: type=1131 audit(1707503532.833:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:12.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.516596 systemd[1]: mnt-oem1707930179.mount: Deactivated successfully. 
Feb 9 18:32:12.968797 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3927228488" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3927228488" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3927228488" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1a): [started] processing unit "prepare-critools.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1a): [finished] processing unit "prepare-critools.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1c): [started] processing unit "prepare-helm.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1c): op(1d): [finished] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1c): [finished] processing unit "prepare-helm.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1e): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:32:12.968797 ignition[1029]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:32:13.289600 kernel: audit: type=1130 audit(1707503532.984:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.289631 kernel: audit: type=1131 audit(1707503533.069:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.289641 kernel: audit: type=1131 audit(1707503533.258:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:13.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.540060 systemd[1]: mnt-oem3927228488.mount: Deactivated successfully. Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(1e): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(22): [started] setting preset to enabled for "waagent.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(22): [finished] setting preset to enabled for "waagent.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:32:13.299274 ignition[1029]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:32:13.299274 
ignition[1029]: INFO : files: files passed Feb 9 18:32:13.299274 ignition[1029]: INFO : Ignition finished successfully Feb 9 18:32:13.493066 kernel: audit: type=1131 audit(1707503533.305:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:13.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.493287 initrd-setup-root-after-ignition[1055]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:32:13.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.561580 systemd[1]: Finished ignition-files.service. Feb 9 18:32:13.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.606629 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:32:12.612168 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:32:13.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.612993 systemd[1]: Starting ignition-quench.service... Feb 9 18:32:12.639491 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:32:13.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.639580 systemd[1]: Finished ignition-quench.service. 
Feb 9 18:32:12.730868 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:32:13.571861 ignition[1068]: INFO : Ignition 2.14.0 Feb 9 18:32:13.571861 ignition[1068]: INFO : Stage: umount Feb 9 18:32:13.571861 ignition[1068]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:13.571861 ignition[1068]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:13.571861 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:13.571861 ignition[1068]: INFO : umount: umount passed Feb 9 18:32:13.571861 ignition[1068]: INFO : Ignition finished successfully Feb 9 18:32:13.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.737180 systemd[1]: Reached target ignition-complete.target. Feb 9 18:32:12.775567 systemd[1]: Starting initrd-parse-etc.service... Feb 9 18:32:13.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.808235 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:32:13.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.676000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:32:12.808347 systemd[1]: Finished initrd-parse-etc.service. 
Feb 9 18:32:12.833577 systemd[1]: Reached target initrd-fs.target. Feb 9 18:32:13.697000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.887067 systemd[1]: Reached target initrd.target. Feb 9 18:32:12.904019 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:32:12.904894 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:32:13.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.968995 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:32:13.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:12.986040 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:32:13.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.024745 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:32:13.031164 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:32:13.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.047038 systemd[1]: Stopped target timers.target. Feb 9 18:32:13.058462 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Feb 9 18:32:13.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.058530 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:32:13.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.070030 systemd[1]: Stopped target initrd.target. Feb 9 18:32:13.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.099435 systemd[1]: Stopped target basic.target. Feb 9 18:32:13.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.112532 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:32:13.850139 kernel: hv_netvsc 0022487c-8a6c-0022-487c-8a6c0022487c eth0: Data path switched from VF: enP12173s1 Feb 9 18:32:13.843000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.128711 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:32:13.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.145843 systemd[1]: Stopped target initrd-root-device.target. 
Feb 9 18:32:13.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.162750 systemd[1]: Stopped target remote-fs.target. Feb 9 18:32:13.174608 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 18:32:13.186732 systemd[1]: Stopped target sysinit.target. Feb 9 18:32:13.202533 systemd[1]: Stopped target local-fs.target. Feb 9 18:32:13.218424 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:32:13.230383 systemd[1]: Stopped target swap.target. Feb 9 18:32:13.242253 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:32:13.242319 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:32:13.258940 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:32:13.294063 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:32:13.294113 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:32:13.306073 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:32:13.306109 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:32:13.336529 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:32:13.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:13.336569 systemd[1]: Stopped ignition-files.service. Feb 9 18:32:13.348849 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 18:32:13.348888 systemd[1]: Stopped flatcar-metadata-hostname.service. 
Feb 9 18:32:13.371543 systemd[1]: Stopping ignition-mount.service... Feb 9 18:32:13.394568 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:32:13.411611 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:32:13.411716 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:32:13.424507 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:32:13.424557 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:32:14.029130 systemd-journald[276]: Received SIGTERM from PID 1 (systemd). Feb 9 18:32:14.029181 iscsid[880]: iscsid shutting down. Feb 9 18:32:13.446102 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:32:13.446212 systemd[1]: Finished initrd-cleanup.service. Feb 9 18:32:13.469051 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:32:13.469228 systemd[1]: Stopped ignition-mount.service. Feb 9 18:32:13.489077 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:32:13.489397 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:32:13.489441 systemd[1]: Stopped ignition-disks.service. Feb 9 18:32:13.497540 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:32:13.497583 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:32:13.511575 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 18:32:13.511618 systemd[1]: Stopped ignition-fetch.service. Feb 9 18:32:13.526711 systemd[1]: Stopped target network.target. Feb 9 18:32:13.542557 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:32:13.542614 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:32:13.552362 systemd[1]: Stopped target paths.target. Feb 9 18:32:13.561961 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:32:13.571791 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:32:13.576903 systemd[1]: Stopped target slices.target. 
Feb 9 18:32:13.585008 systemd[1]: Stopped target sockets.target. Feb 9 18:32:13.592897 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:32:13.592925 systemd[1]: Closed iscsid.socket. Feb 9 18:32:13.603441 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:32:13.603464 systemd[1]: Closed iscsiuio.socket. Feb 9 18:32:13.623480 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:32:13.623521 systemd[1]: Stopped ignition-setup.service. Feb 9 18:32:13.635154 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:32:13.643542 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:32:13.648227 systemd-networkd[868]: eth0: DHCPv6 lease lost Feb 9 18:32:14.029000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:32:13.653739 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:32:13.653838 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:32:13.663239 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:32:13.663322 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:32:13.668028 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:32:13.668140 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:32:13.677529 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:32:13.677567 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:32:13.687632 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:32:13.687687 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:32:13.698284 systemd[1]: Stopping network-cleanup.service... Feb 9 18:32:13.711251 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:32:13.711322 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:32:13.724465 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:32:13.724516 systemd[1]: Stopped systemd-sysctl.service. Feb 9 18:32:13.737902 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Feb 9 18:32:13.737943 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:32:13.743506 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:32:13.756101 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:32:13.756616 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:32:13.756762 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:32:13.764992 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:32:13.765033 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:32:13.774843 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:32:13.774876 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:32:13.780481 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:32:13.780527 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:32:13.789551 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:32:13.789588 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:32:13.800177 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:32:13.800211 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:32:13.810012 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:32:13.818740 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 18:32:13.818795 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 18:32:13.824335 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:32:13.824382 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:32:13.844154 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:32:13.844202 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:32:13.857232 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Feb 9 18:32:13.857717 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Feb 9 18:32:13.857819 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:32:13.944891 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:32:13.944999 systemd[1]: Stopped network-cleanup.service. Feb 9 18:32:13.950714 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:32:13.962447 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:32:13.982672 systemd[1]: Switching root. Feb 9 18:32:14.031179 systemd-journald[276]: Journal stopped Feb 9 18:32:35.673910 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:32:35.673932 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:32:35.673943 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:32:35.673953 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:32:35.673961 kernel: SELinux: policy capability open_perms=1 Feb 9 18:32:35.673969 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:32:35.673979 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:32:35.673987 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:32:35.673998 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:32:35.674006 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:32:35.674016 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:32:35.674025 systemd[1]: Successfully loaded SELinux policy in 281.875ms. Feb 9 18:32:35.674036 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 31.950ms. Feb 9 18:32:35.674046 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:32:35.674058 systemd[1]: Detected virtualization microsoft. 
Feb 9 18:32:35.674067 systemd[1]: Detected architecture arm64.
Feb 9 18:32:35.674076 systemd[1]: Detected first boot.
Feb 9 18:32:35.674086 systemd[1]: Hostname set to .
Feb 9 18:32:35.674095 systemd[1]: Initializing machine ID from random generator.
Feb 9 18:32:35.674104 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 18:32:35.674112 kernel: kauditd_printk_skb: 39 callbacks suppressed
Feb 9 18:32:35.674122 kernel: audit: type=1400 audit(1707503538.591:87): avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 18:32:35.674134 kernel: audit: type=1300 audit(1707503538.591:87): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458c4 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:35.674144 kernel: audit: type=1327 audit(1707503538.591:87): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:32:35.674153 kernel: audit: type=1400 audit(1707503538.602:88): avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 18:32:35.674163 kernel: audit: type=1300 audit(1707503538.602:88): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459a9 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:35.674172 kernel: audit: type=1307 audit(1707503538.602:88): cwd="/"
Feb 9 18:32:35.674182 kernel: audit: type=1302 audit(1707503538.602:88): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:35.674192 kernel: audit: type=1302 audit(1707503538.602:88): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:35.674202 kernel: audit: type=1327 audit(1707503538.602:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:32:35.674212 systemd[1]: Populated /etc with preset unit settings.
Feb 9 18:32:35.674221 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 18:32:35.674231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 18:32:35.674242 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 18:32:35.674253 kernel: audit: type=1334 audit(1707503554.837:89): prog-id=12 op=LOAD
Feb 9 18:32:35.674261 kernel: audit: type=1334 audit(1707503554.837:90): prog-id=3 op=UNLOAD
Feb 9 18:32:35.674270 kernel: audit: type=1334 audit(1707503554.844:91): prog-id=13 op=LOAD
Feb 9 18:32:35.674279 kernel: audit: type=1334 audit(1707503554.851:92): prog-id=14 op=LOAD
Feb 9 18:32:35.674288 kernel: audit: type=1334 audit(1707503554.851:93): prog-id=4 op=UNLOAD
Feb 9 18:32:35.674297 kernel: audit: type=1334 audit(1707503554.851:94): prog-id=5 op=UNLOAD
Feb 9 18:32:35.674307 kernel: audit: type=1334 audit(1707503554.857:95): prog-id=15 op=LOAD
Feb 9 18:32:35.674316 kernel: audit: type=1334 audit(1707503554.857:96): prog-id=12 op=UNLOAD
Feb 9 18:32:35.674326 kernel: audit: type=1334 audit(1707503554.863:97): prog-id=16 op=LOAD
Feb 9 18:32:35.674336 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 18:32:35.674345 kernel: audit: type=1334 audit(1707503554.870:98): prog-id=17 op=LOAD
Feb 9 18:32:35.674354 systemd[1]: Stopped iscsiuio.service.
Feb 9 18:32:35.674364 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 18:32:35.674373 systemd[1]: Stopped iscsid.service.
Feb 9 18:32:35.674383 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 18:32:35.674394 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 18:32:35.674405 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 18:32:35.674415 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 18:32:35.674424 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 18:32:35.674434 systemd[1]: Created slice system-getty.slice.
Feb 9 18:32:35.674444 systemd[1]: Created slice system-modprobe.slice.
Feb 9 18:32:35.674453 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 18:32:35.674463 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 18:32:35.674473 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 18:32:35.674483 systemd[1]: Created slice user.slice.
Feb 9 18:32:35.674493 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 18:32:35.674502 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 18:32:35.674512 systemd[1]: Set up automount boot.automount.
Feb 9 18:32:35.674521 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 18:32:35.674531 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 18:32:35.674541 systemd[1]: Stopped target initrd-fs.target.
Feb 9 18:32:35.674550 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 18:32:35.674561 systemd[1]: Reached target integritysetup.target.
Feb 9 18:32:35.674571 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 18:32:35.674580 systemd[1]: Reached target remote-fs.target.
Feb 9 18:32:35.674590 systemd[1]: Reached target slices.target.
Feb 9 18:32:35.674600 systemd[1]: Reached target swap.target.
Feb 9 18:32:35.674611 systemd[1]: Reached target torcx.target.
Feb 9 18:32:35.674622 systemd[1]: Reached target veritysetup.target.
Feb 9 18:32:35.674632 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 18:32:35.674642 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 18:32:35.674663 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 18:32:35.674673 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 18:32:35.674683 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 18:32:35.674693 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 18:32:35.674702 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 18:32:35.674714 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 18:32:35.674724 systemd[1]: Mounting media.mount...
Feb 9 18:32:35.674733 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 18:32:35.674743 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 18:32:35.674752 systemd[1]: Mounting tmp.mount...
Feb 9 18:32:35.674762 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 18:32:35.674772 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 18:32:35.674782 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 18:32:35.674791 systemd[1]: Starting modprobe@configfs.service...
Feb 9 18:32:35.674802 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 18:32:35.674812 systemd[1]: Starting modprobe@drm.service...
Feb 9 18:32:35.674823 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 18:32:35.674834 systemd[1]: Starting modprobe@fuse.service...
Feb 9 18:32:35.674844 systemd[1]: Starting modprobe@loop.service...
Feb 9 18:32:35.674854 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 18:32:35.674864 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 18:32:35.674874 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 18:32:35.674885 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 18:32:35.674895 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 18:32:35.674904 systemd[1]: Stopped systemd-journald.service.
Feb 9 18:32:35.674914 systemd[1]: systemd-journald.service: Consumed 3.505s CPU time.
Feb 9 18:32:35.674924 kernel: fuse: init (API version 7.34)
Feb 9 18:32:35.674933 systemd[1]: Starting systemd-journald.service...
Feb 9 18:32:35.674942 kernel: loop: module loaded
Feb 9 18:32:35.674951 systemd[1]: Starting systemd-modules-load.service...
Feb 9 18:32:35.674961 systemd[1]: Starting systemd-network-generator.service...
Feb 9 18:32:35.674972 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 18:32:35.674982 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 18:32:35.674992 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 18:32:35.675001 systemd[1]: Stopped verity-setup.service.
Feb 9 18:32:35.675011 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 18:32:35.675021 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 18:32:35.675031 systemd[1]: Mounted media.mount.
Feb 9 18:32:35.675041 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 18:32:35.675054 systemd-journald[1203]: Journal started
Feb 9 18:32:35.675096 systemd-journald[1203]: Runtime Journal (/run/log/journal/80eba5fa408848fb855b9e6f244d4e35) is 8.0M, max 78.6M, 70.6M free.
Feb 9 18:32:16.486000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 18:32:17.229000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 18:32:17.229000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 18:32:17.229000 audit: BPF prog-id=10 op=LOAD
Feb 9 18:32:17.229000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 18:32:17.229000 audit: BPF prog-id=11 op=LOAD
Feb 9 18:32:17.229000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 18:32:18.591000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 18:32:18.591000 audit[1101]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458c4 a1=40000c6df8 a2=40000cd0c0 a3=32 items=0 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:18.591000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:32:18.602000 audit[1101]: AVC avc: denied { associate } for pid=1101 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 18:32:18.602000 audit[1101]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459a9 a2=1ed a3=0 items=2 ppid=1084 pid=1101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:18.602000 audit: CWD cwd="/"
Feb 9 18:32:18.602000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.602000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:18.602000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 18:32:34.837000 audit: BPF prog-id=12 op=LOAD
Feb 9 18:32:34.837000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 18:32:34.844000 audit: BPF prog-id=13 op=LOAD
Feb 9 18:32:34.851000 audit: BPF prog-id=14 op=LOAD
Feb 9 18:32:34.851000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 18:32:34.851000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 18:32:34.857000 audit: BPF prog-id=15 op=LOAD
Feb 9 18:32:34.857000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 18:32:34.863000 audit: BPF prog-id=16 op=LOAD
Feb 9 18:32:34.870000 audit: BPF prog-id=17 op=LOAD
Feb 9 18:32:34.870000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 18:32:34.870000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 18:32:34.876000 audit: BPF prog-id=18 op=LOAD
Feb 9 18:32:34.876000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 18:32:34.882000 audit: BPF prog-id=19 op=LOAD
Feb 9 18:32:34.888000 audit: BPF prog-id=20 op=LOAD
Feb 9 18:32:34.888000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 18:32:34.888000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 18:32:34.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:34.913000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 18:32:34.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:34.932000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:34.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:34.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.536000 audit: BPF prog-id=21 op=LOAD
Feb 9 18:32:35.536000 audit: BPF prog-id=22 op=LOAD
Feb 9 18:32:35.536000 audit: BPF prog-id=23 op=LOAD
Feb 9 18:32:35.536000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 18:32:35.536000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 18:32:35.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.671000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 18:32:35.671000 audit[1203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe1490ec0 a2=4000 a3=1 items=0 ppid=1 pid=1203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:35.671000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 18:32:34.836669 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 18:32:18.526815 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 18:32:34.889919 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 18:32:18.557143 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:32:34.890266 systemd[1]: systemd-journald.service: Consumed 3.505s CPU time.
Feb 9 18:32:18.557178 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:32:18.557228 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 18:32:18.557237 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 18:32:18.557270 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 18:32:18.557282 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 18:32:18.557477 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 18:32:18.557509 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 18:32:18.557521 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 18:32:18.571768 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 18:32:18.571806 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 18:32:18.571826 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 18:32:18.571841 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 18:32:18.571860 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 18:32:18.571874 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 18:32:31.089273 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:31Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:32:31.089535 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:31Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:32:31.089646 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:31Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:32:31.089819 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:31Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 18:32:31.089868 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:31Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 18:32:31.089921 /usr/lib/systemd/system-generators/torcx-generator[1101]: time="2024-02-09T18:32:31Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 18:32:35.687269 systemd[1]: Started systemd-journald.service.
Feb 9 18:32:35.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.688317 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 18:32:35.693781 systemd[1]: Mounted tmp.mount.
Feb 9 18:32:35.698062 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 18:32:35.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.703475 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 18:32:35.703599 systemd[1]: Finished modprobe@configfs.service.
Feb 9 18:32:35.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.709514 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 18:32:35.709637 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 18:32:35.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.715292 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 18:32:35.715412 systemd[1]: Finished modprobe@drm.service.
Feb 9 18:32:35.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.720603 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 18:32:35.720738 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 18:32:35.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.726859 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 18:32:35.726982 systemd[1]: Finished modprobe@fuse.service.
Feb 9 18:32:35.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.734398 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 18:32:35.734523 systemd[1]: Finished modprobe@loop.service.
Feb 9 18:32:35.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.739878 systemd[1]: Finished systemd-network-generator.service.
Feb 9 18:32:35.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.746155 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 18:32:35.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.752268 systemd[1]: Reached target network-pre.target.
Feb 9 18:32:35.758964 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 18:32:35.765552 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 18:32:35.770310 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 18:32:35.771930 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 18:32:35.778798 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 18:32:35.783811 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 18:32:35.784999 systemd[1]: Starting systemd-random-seed.service...
Feb 9 18:32:35.790201 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 18:32:35.794750 systemd[1]: Finished systemd-modules-load.service.
Feb 9 18:32:35.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.801083 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 18:32:35.807249 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 18:32:35.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:35.812985 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 18:32:35.819628 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:32:35.826798 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 18:32:35.833846 systemd-journald[1203]: Time spent on flushing to /var/log/journal/80eba5fa408848fb855b9e6f244d4e35 is 22.983ms for 1142 entries.
Feb 9 18:32:35.833846 systemd-journald[1203]: System Journal (/var/log/journal/80eba5fa408848fb855b9e6f244d4e35) is 8.0M, max 2.6G, 2.6G free.
Feb 9 18:32:37.441430 systemd-journald[1203]: Received client request to flush runtime journal.
Feb 9 18:32:35.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:36.804000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:36.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:37.441798 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 18:32:35.880506 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:32:37.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:36.799683 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 18:32:36.806295 systemd[1]: Starting systemd-sysusers.service...
Feb 9 18:32:36.814042 systemd[1]: Finished systemd-random-seed.service.
Feb 9 18:32:36.819184 systemd[1]: Reached target first-boot-complete.target.
Feb 9 18:32:37.442388 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 18:32:37.454948 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 18:32:37.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:38.846122 systemd[1]: Finished systemd-sysusers.service.
Feb 9 18:32:38.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:38.855679 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 18:32:39.708478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 18:32:39.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:39.714000 audit: BPF prog-id=24 op=LOAD
Feb 9 18:32:39.714000 audit: BPF prog-id=25 op=LOAD
Feb 9 18:32:39.714000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 18:32:39.714000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 18:32:39.715905 systemd[1]: Starting systemd-udevd.service...
Feb 9 18:32:39.735295 systemd-udevd[1226]: Using default interface naming scheme 'v252'.
Feb 9 18:32:39.958982 systemd[1]: Started systemd-udevd.service.
Feb 9 18:32:39.995417 kernel: kauditd_printk_skb: 56 callbacks suppressed
Feb 9 18:32:39.995522 kernel: audit: type=1130 audit(1707503559.967:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:39.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:39.996566 systemd[1]: Starting systemd-networkd.service...
Feb 9 18:32:39.994000 audit: BPF prog-id=26 op=LOAD
Feb 9 18:32:40.012672 kernel: audit: type=1334 audit(1707503559.994:154): prog-id=26 op=LOAD
Feb 9 18:32:40.032267 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
Feb 9 18:32:40.059000 audit: BPF prog-id=27 op=LOAD
Feb 9 18:32:40.061409 systemd[1]: Starting systemd-userdbd.service...
Feb 9 18:32:40.060000 audit: BPF prog-id=28 op=LOAD
Feb 9 18:32:40.083785 kernel: audit: type=1334 audit(1707503560.059:155): prog-id=27 op=LOAD
Feb 9 18:32:40.083871 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 18:32:40.083894 kernel: audit: type=1334 audit(1707503560.060:156): prog-id=28 op=LOAD
Feb 9 18:32:40.060000 audit: BPF prog-id=29 op=LOAD
Feb 9 18:32:40.094259 kernel: audit: type=1334 audit(1707503560.060:157): prog-id=29 op=LOAD
Feb 9 18:32:40.062000 audit[1240]: AVC avc: denied { confidentiality } for pid=1240 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 18:32:40.117102 kernel: audit: type=1400 audit(1707503560.062:158): avc: denied { confidentiality } for pid=1240 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 18:32:40.124682 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 18:32:40.140618 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 18:32:40.140737 kernel: hv_vmbus: registering driver hv_utils
Feb 9 18:32:40.153819 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 18:32:40.153909 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 18:32:40.153932 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 18:32:40.271751 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 18:32:40.271839 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 18:32:40.282189 kernel: hv_balloon: Memory hot add disabled on ARM64
Feb 9 18:32:40.282391 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 18:32:40.295610 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 18:32:40.302770 kernel: Console: switching to colour dummy device 80x25
Feb 9 18:32:40.305720 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 18:32:40.062000 audit[1240]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad615e540 a1=aa2c a2=ffffacd824b0 a3=aaaad60b9010 items=12 ppid=1226 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:40.344645 systemd[1]: Started systemd-userdbd.service.
Feb 9 18:32:40.062000 audit: CWD cwd="/"
Feb 9 18:32:40.354501 kernel: audit: type=1300 audit(1707503560.062:158): arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaad615e540 a1=aa2c a2=ffffacd824b0 a3=aaaad60b9010 items=12 ppid=1226 pid=1240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:40.354580 kernel: audit: type=1307 audit(1707503560.062:158): cwd="/"
Feb 9 18:32:40.062000 audit: PATH item=0 name=(null) inode=7189 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.372445 kernel: audit: type=1302 audit(1707503560.062:158): item=0 name=(null) inode=7189 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=1 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.396088 kernel: audit: type=1302 audit(1707503560.062:158): item=1 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=2 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=3 name=(null) inode=10664 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=4 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=5 name=(null) inode=10665 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=6 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=7 name=(null) inode=10666 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=8 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=9 name=(null) inode=10667 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=10 name=(null) inode=10663 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PATH item=11 name=(null) inode=10668 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 18:32:40.062000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 18:32:40.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:40.635741 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1230)
Feb 9 18:32:40.649703 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 18:32:40.658069 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 18:32:40.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:40.664740 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 18:32:40.889653 lvm[1301]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:32:40.912673 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 18:32:40.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:40.918650 systemd[1]: Reached target cryptsetup.target.
Feb 9 18:32:40.925127 systemd[1]: Starting lvm2-activation.service...
Feb 9 18:32:40.929162 lvm[1303]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 18:32:40.936234 systemd-networkd[1247]: lo: Link UP
Feb 9 18:32:40.936498 systemd-networkd[1247]: lo: Gained carrier
Feb 9 18:32:40.937051 systemd-networkd[1247]: Enumeration completed
Feb 9 18:32:40.937302 systemd[1]: Started systemd-networkd.service.
Feb 9 18:32:40.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:40.945046 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 18:32:40.961655 systemd[1]: Finished lvm2-activation.service.
Feb 9 18:32:40.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:40.966923 systemd[1]: Reached target local-fs-pre.target.
Feb 9 18:32:40.971960 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 18:32:40.971988 systemd[1]: Reached target local-fs.target.
Feb 9 18:32:40.977246 systemd-networkd[1247]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 18:32:40.977980 systemd[1]: Reached target machines.target.
Feb 9 18:32:40.984481 systemd[1]: Starting ldconfig.service...
Feb 9 18:32:40.993266 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 18:32:40.993372 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:32:40.994800 systemd[1]: Starting systemd-boot-update.service...
Feb 9 18:32:41.001783 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 18:32:41.009070 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 18:32:41.014092 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:32:41.014151 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 18:32:41.015232 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 18:32:41.044423 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 18:32:41.058714 kernel: mlx5_core 2f8d:00:02.0 enP12173s1: Link up
Feb 9 18:32:41.088295 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1306 (bootctl)
Feb 9 18:32:41.089758 kernel: hv_netvsc 0022487c-8a6c-0022-487c-8a6c0022487c eth0: Data path switched to VF: enP12173s1
Feb 9 18:32:41.089559 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 18:32:41.096889 systemd-networkd[1247]: enP12173s1: Link UP
Feb 9 18:32:41.097304 systemd-networkd[1247]: eth0: Link UP
Feb 9 18:32:41.097617 systemd-networkd[1247]: eth0: Gained carrier
Feb 9 18:32:41.104880 systemd-networkd[1247]: enP12173s1: Gained carrier
Feb 9 18:32:41.125808 systemd-networkd[1247]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16
Feb 9 18:32:41.209565 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 18:32:41.211101 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 18:32:41.215068 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 18:32:41.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:41.803445 systemd-fsck[1314]: fsck.fat 4.2 (2021-01-31)
Feb 9 18:32:41.803445 systemd-fsck[1314]: /dev/sda1: 236 files, 113719/258078 clusters
Feb 9 18:32:41.805410 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 18:32:41.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:41.814231 systemd[1]: Mounting boot.mount...
Feb 9 18:32:41.825440 systemd[1]: Mounted boot.mount.
Feb 9 18:32:41.835277 systemd[1]: Finished systemd-boot-update.service.
Feb 9 18:32:41.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:41.929647 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 18:32:41.930252 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 18:32:41.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.351544 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 18:32:42.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.358839 systemd[1]: Starting audit-rules.service...
Feb 9 18:32:42.364452 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 18:32:42.370774 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 18:32:42.376000 audit: BPF prog-id=30 op=LOAD
Feb 9 18:32:42.378547 systemd[1]: Starting systemd-resolved.service...
Feb 9 18:32:42.382000 audit: BPF prog-id=31 op=LOAD
Feb 9 18:32:42.385299 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 18:32:42.390971 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 18:32:42.425000 audit[1326]: SYSTEM_BOOT pid=1326 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.433266 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 18:32:42.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.439730 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 18:32:42.441148 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 18:32:42.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.494054 systemd[1]: Started systemd-timesyncd.service.
Feb 9 18:32:42.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.499983 systemd[1]: Reached target time-set.target.
Feb 9 18:32:42.521755 systemd-resolved[1323]: Positive Trust Anchors:
Feb 9 18:32:42.522033 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 18:32:42.522114 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 18:32:42.550075 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 18:32:42.556241 systemd-resolved[1323]: Using system hostname 'ci-3510.3.2-a-37f6c6cc7b'.
Feb 9 18:32:42.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.558170 systemd[1]: Started systemd-resolved.service.
Feb 9 18:32:42.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:32:42.563513 systemd[1]: Reached target network.target.
Feb 9 18:32:42.568476 systemd[1]: Reached target nss-lookup.target.
Feb 9 18:32:42.762027 systemd-timesyncd[1325]: Contacted time server 137.190.2.4:123 (0.flatcar.pool.ntp.org).
Feb 9 18:32:42.762099 systemd-timesyncd[1325]: Initial clock synchronization to Fri 2024-02-09 18:32:42.742901 UTC.
Feb 9 18:32:42.807909 augenrules[1341]: No rules
Feb 9 18:32:42.806000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 18:32:42.806000 audit[1341]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd459a250 a2=420 a3=0 items=0 ppid=1320 pid=1341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:32:42.806000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 18:32:42.808607 systemd[1]: Finished audit-rules.service.
Feb 9 18:32:42.995877 systemd-networkd[1247]: eth0: Gained IPv6LL
Feb 9 18:32:42.997555 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 18:32:43.003527 systemd[1]: Reached target network-online.target.
Feb 9 18:32:49.016096 ldconfig[1305]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 18:32:49.027133 systemd[1]: Finished ldconfig.service.
Feb 9 18:32:49.036270 systemd[1]: Starting systemd-update-done.service...
Feb 9 18:32:49.075089 systemd[1]: Finished systemd-update-done.service.
Feb 9 18:32:49.082319 systemd[1]: Reached target sysinit.target.
Feb 9 18:32:49.089183 systemd[1]: Started motdgen.path.
Feb 9 18:32:49.095102 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 18:32:49.104835 systemd[1]: Started logrotate.timer.
Feb 9 18:32:49.111172 systemd[1]: Started mdadm.timer.
Feb 9 18:32:49.117085 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 18:32:49.124096 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 18:32:49.124129 systemd[1]: Reached target paths.target.
Feb 9 18:32:49.130609 systemd[1]: Reached target timers.target.
Feb 9 18:32:49.137441 systemd[1]: Listening on dbus.socket.
Feb 9 18:32:49.145003 systemd[1]: Starting docker.socket...
Feb 9 18:32:49.175228 systemd[1]: Listening on sshd.socket.
Feb 9 18:32:49.181849 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:32:49.182375 systemd[1]: Listening on docker.socket.
Feb 9 18:32:49.188809 systemd[1]: Reached target sockets.target.
Feb 9 18:32:49.194818 systemd[1]: Reached target basic.target.
Feb 9 18:32:49.200485 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:32:49.200513 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 18:32:49.201611 systemd[1]: Starting containerd.service...
Feb 9 18:32:49.209014 systemd[1]: Starting dbus.service...
Feb 9 18:32:49.214931 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 18:32:49.223129 systemd[1]: Starting extend-filesystems.service...
Feb 9 18:32:49.229775 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 18:32:49.230880 systemd[1]: Starting motdgen.service...
Feb 9 18:32:49.239410 systemd[1]: Started nvidia.service.
Feb 9 18:32:49.247715 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 18:32:49.254700 systemd[1]: Starting prepare-critools.service...
Feb 9 18:32:49.261594 systemd[1]: Starting prepare-helm.service...
Feb 9 18:32:49.267969 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 18:32:49.275016 systemd[1]: Starting sshd-keygen.service...
Feb 9 18:32:49.278025 jq[1351]: false
Feb 9 18:32:49.282475 systemd[1]: Starting systemd-logind.service...
Feb 9 18:32:49.288769 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 18:32:49.288829 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 18:32:49.289244 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 18:32:49.289955 systemd[1]: Starting update-engine.service...
Feb 9 18:32:49.296822 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 18:32:49.309337 jq[1371]: true
Feb 9 18:32:49.312792 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 18:32:49.312972 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 18:32:49.315908 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 18:32:49.316074 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 18:32:49.459591 extend-filesystems[1352]: Found sda
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda1
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda2
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda3
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found usr
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda4
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda6
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda7
Feb 9 18:32:49.466805 extend-filesystems[1352]: Found sda9
Feb 9 18:32:49.466805 extend-filesystems[1352]: Checking size of /dev/sda9
Feb 9 18:32:49.597302 jq[1379]: true
Feb 9 18:32:49.465709 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 18:32:49.604122 tar[1374]: crictl
Feb 9 18:32:49.604307 env[1380]: time="2024-02-09T18:32:49.588324504Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 18:32:49.604454 extend-filesystems[1352]: Old size kept for /dev/sda9
Feb 9 18:32:49.604454 extend-filesystems[1352]: Found sr0
Feb 9 18:32:49.653049 tar[1373]: ./
Feb 9 18:32:49.653049 tar[1373]: ./macvlan
Feb 9 18:32:49.653049 tar[1373]: ./static
Feb 9 18:32:49.658426 tar[1375]: linux-arm64/helm
Feb 9 18:32:49.465890 systemd[1]: Finished motdgen.service.
Feb 9 18:32:49.561823 systemd-logind[1366]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 18:32:49.562023 systemd-logind[1366]: New seat seat0.
Feb 9 18:32:49.589028 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 18:32:49.589184 systemd[1]: Finished extend-filesystems.service.
Feb 9 18:32:49.666024 bash[1403]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 18:32:49.666317 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 18:32:49.708285 env[1380]: time="2024-02-09T18:32:49.708225663Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 18:32:49.708401 env[1380]: time="2024-02-09T18:32:49.708378900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:49.711238 env[1380]: time="2024-02-09T18:32:49.711089352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:32:49.711238 env[1380]: time="2024-02-09T18:32:49.711142510Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:49.713883 env[1380]: time="2024-02-09T18:32:49.713842571Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:32:49.713883 env[1380]: time="2024-02-09T18:32:49.713876623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:49.713973 env[1380]: time="2024-02-09T18:32:49.713891891Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 18:32:49.713973 env[1380]: time="2024-02-09T18:32:49.713901723Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:49.714027 env[1380]: time="2024-02-09T18:32:49.713998884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:49.714228 env[1380]: time="2024-02-09T18:32:49.714202200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 18:32:49.714356 env[1380]: time="2024-02-09T18:32:49.714332415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 18:32:49.714356 env[1380]: time="2024-02-09T18:32:49.714352679Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 18:32:49.714424 env[1380]: time="2024-02-09T18:32:49.714405676Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 18:32:49.714424 env[1380]: time="2024-02-09T18:32:49.714420904Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 18:32:49.719315 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 18:32:49.729567 tar[1373]: ./vlan
Feb 9 18:32:49.731029 env[1380]: time="2024-02-09T18:32:49.730991172Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 18:32:49.731029 env[1380]: time="2024-02-09T18:32:49.731034377Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 18:32:49.731140 env[1380]: time="2024-02-09T18:32:49.731048565Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 18:32:49.731140 env[1380]: time="2024-02-09T18:32:49.731081219Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731140 env[1380]: time="2024-02-09T18:32:49.731097366Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731140 env[1380]: time="2024-02-09T18:32:49.731111395Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731140 env[1380]: time="2024-02-09T18:32:49.731123785Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731475 env[1380]: time="2024-02-09T18:32:49.731452759Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731475 env[1380]: time="2024-02-09T18:32:49.731475541Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731587 env[1380]: time="2024-02-09T18:32:49.731490049Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731587 env[1380]: time="2024-02-09T18:32:49.731502879Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.731587 env[1380]: time="2024-02-09T18:32:49.731516987Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 18:32:49.731651 env[1380]: time="2024-02-09T18:32:49.731640328Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 18:32:49.731765 env[1380]: time="2024-02-09T18:32:49.731736730Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 18:32:49.732002 env[1380]: time="2024-02-09T18:32:49.731981533Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 18:32:49.732044 env[1380]: time="2024-02-09T18:32:49.732010030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732044 env[1380]: time="2024-02-09T18:32:49.732023059Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 18:32:49.732113 env[1380]: time="2024-02-09T18:32:49.732063426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732113 env[1380]: time="2024-02-09T18:32:49.732076576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732113 env[1380]: time="2024-02-09T18:32:49.732088446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732113 env[1380]: time="2024-02-09T18:32:49.732100596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732199 env[1380]: time="2024-02-09T18:32:49.732111947Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732199 env[1380]: time="2024-02-09T18:32:49.732124697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732199 env[1380]: time="2024-02-09T18:32:49.732136208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732199 env[1380]: time="2024-02-09T18:32:49.732147638Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732199 env[1380]: time="2024-02-09T18:32:49.732160508Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 18:32:49.732300 env[1380]: time="2024-02-09T18:32:49.732269220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732300 env[1380]: time="2024-02-09T18:32:49.732284688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732300 env[1380]: time="2024-02-09T18:32:49.732297078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732357 env[1380]: time="2024-02-09T18:32:49.732308708Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 18:32:49.732357 env[1380]: time="2024-02-09T18:32:49.732321898Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 18:32:49.732357 env[1380]: time="2024-02-09T18:32:49.732332689Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 18:32:49.732357 env[1380]: time="2024-02-09T18:32:49.732350195Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 18:32:49.732439 env[1380]: time="2024-02-09T18:32:49.732386126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 18:32:49.732644 env[1380]: time="2024-02-09T18:32:49.732574574Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 18:32:49.732644 env[1380]: time="2024-02-09T18:32:49.732634246Z" level=info msg="Connect containerd service"
Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.732669297Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733356063Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733467653Z" level=info msg="Start subscribing containerd event"
Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733503664Z" level=info msg="Start recovering state"
Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733558220Z" level=info msg="Start event monitor"
Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733572569Z" level=info msg="Start snapshots syncer"
Feb 9
18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733581122Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733587596Z" level=info msg="Start streaming server" Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733869769Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:32:49.748292 env[1380]: time="2024-02-09T18:32:49.733905020Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:32:49.742394 dbus-daemon[1350]: [system] SELinux support is enabled Feb 9 18:32:49.734017 systemd[1]: Started containerd.service. Feb 9 18:32:49.742521 systemd[1]: Started dbus.service. Feb 9 18:32:49.748859 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:32:49.748894 systemd[1]: Reached target system-config.target. Feb 9 18:32:49.752854 env[1380]: time="2024-02-09T18:32:49.752814480Z" level=info msg="containerd successfully booted in 0.168400s" Feb 9 18:32:49.758450 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:32:49.758475 systemd[1]: Reached target user-config.target. Feb 9 18:32:49.767633 systemd[1]: Started systemd-logind.service. 
Feb 9 18:32:49.767757 dbus-daemon[1350]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 18:32:49.840477 tar[1373]: ./portmap Feb 9 18:32:49.914045 tar[1373]: ./host-local Feb 9 18:32:49.958286 tar[1373]: ./vrf Feb 9 18:32:50.007625 tar[1373]: ./bridge Feb 9 18:32:50.086230 tar[1373]: ./tuning Feb 9 18:32:50.149012 tar[1373]: ./firewall Feb 9 18:32:50.206358 tar[1373]: ./host-device Feb 9 18:32:50.238124 update_engine[1369]: I0209 18:32:50.223878 1369 main.cc:92] Flatcar Update Engine starting Feb 9 18:32:50.279019 tar[1373]: ./sbr Feb 9 18:32:50.325725 tar[1373]: ./loopback Feb 9 18:32:50.331855 systemd[1]: Started update-engine.service. Feb 9 18:32:50.332255 update_engine[1369]: I0209 18:32:50.331902 1369 update_check_scheduler.cc:74] Next update check in 9m17s Feb 9 18:32:50.340589 systemd[1]: Started locksmithd.service. Feb 9 18:32:50.386470 tar[1373]: ./dhcp Feb 9 18:32:50.457201 systemd[1]: Finished prepare-critools.service. Feb 9 18:32:50.461414 tar[1375]: linux-arm64/LICENSE Feb 9 18:32:50.461571 tar[1375]: linux-arm64/README.md Feb 9 18:32:50.467448 systemd[1]: Finished prepare-helm.service. Feb 9 18:32:50.506753 tar[1373]: ./ptp Feb 9 18:32:50.534760 tar[1373]: ./ipvlan Feb 9 18:32:50.562011 tar[1373]: ./bandwidth Feb 9 18:32:50.649241 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:32:51.184835 sshd_keygen[1370]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:32:51.202192 systemd[1]: Finished sshd-keygen.service. Feb 9 18:32:51.208418 systemd[1]: Starting issuegen.service... Feb 9 18:32:51.213274 systemd[1]: Started waagent.service. Feb 9 18:32:51.218032 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:32:51.218192 systemd[1]: Finished issuegen.service. Feb 9 18:32:51.223989 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:32:51.257772 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:32:51.264348 systemd[1]: Started getty@tty1.service. 
Feb 9 18:32:51.269975 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:32:51.275309 systemd[1]: Reached target getty.target. Feb 9 18:32:51.279757 systemd[1]: Reached target multi-user.target. Feb 9 18:32:51.285860 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:32:51.298225 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:32:51.298381 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:32:51.304073 systemd[1]: Startup finished in 800ms (kernel) + 18.358s (initrd) + 35.196s (userspace) = 54.355s. Feb 9 18:32:51.929250 login[1480]: pam_lastlog(login:session): file /var/log/lastlog is locked/read Feb 9 18:32:51.930935 login[1481]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:32:51.954809 systemd[1]: Created slice user-500.slice. Feb 9 18:32:51.955517 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:32:51.955907 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:32:51.958836 systemd-logind[1366]: New session 1 of user core. Feb 9 18:32:51.992733 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:32:51.994117 systemd[1]: Starting user@500.service... Feb 9 18:32:52.012629 (systemd)[1484]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:32:52.236598 systemd[1484]: Queued start job for default target default.target. Feb 9 18:32:52.237132 systemd[1484]: Reached target paths.target. Feb 9 18:32:52.237152 systemd[1484]: Reached target sockets.target. Feb 9 18:32:52.237162 systemd[1484]: Reached target timers.target. Feb 9 18:32:52.237172 systemd[1484]: Reached target basic.target. Feb 9 18:32:52.237216 systemd[1484]: Reached target default.target. Feb 9 18:32:52.237239 systemd[1484]: Startup finished in 219ms. Feb 9 18:32:52.237281 systemd[1]: Started user@500.service. Feb 9 18:32:52.238180 systemd[1]: Started session-1.scope. 
Feb 9 18:32:52.929598 login[1480]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:32:52.933719 systemd[1]: Started session-2.scope. Feb 9 18:32:52.934072 systemd-logind[1366]: New session 2 of user core. Feb 9 18:32:57.675533 waagent[1477]: 2024-02-09T18:32:57.675431Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 18:32:57.682836 waagent[1477]: 2024-02-09T18:32:57.682755Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 18:32:57.687925 waagent[1477]: 2024-02-09T18:32:57.687863Z INFO Daemon Daemon Python: 3.9.16 Feb 9 18:32:57.694926 waagent[1477]: 2024-02-09T18:32:57.694826Z INFO Daemon Daemon Run daemon Feb 9 18:32:57.701193 waagent[1477]: 2024-02-09T18:32:57.701114Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 18:32:57.720521 waagent[1477]: 2024-02-09T18:32:57.720380Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 18:32:57.737007 waagent[1477]: 2024-02-09T18:32:57.736871Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:32:57.748198 waagent[1477]: 2024-02-09T18:32:57.748112Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:32:57.753734 waagent[1477]: 2024-02-09T18:32:57.753651Z INFO Daemon Daemon Using waagent for provisioning Feb 9 18:32:57.760140 waagent[1477]: 2024-02-09T18:32:57.760071Z INFO Daemon Daemon Activate resource disk Feb 9 18:32:57.765159 waagent[1477]: 2024-02-09T18:32:57.765093Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 18:32:57.780137 waagent[1477]: 2024-02-09T18:32:57.780056Z INFO Daemon Daemon Found device: None Feb 9 18:32:57.785126 waagent[1477]: 2024-02-09T18:32:57.785051Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 18:32:57.794265 waagent[1477]: 2024-02-09T18:32:57.794192Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 18:32:57.807215 waagent[1477]: 2024-02-09T18:32:57.807144Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:32:57.813701 waagent[1477]: 2024-02-09T18:32:57.813630Z INFO Daemon Daemon Running default provisioning handler Feb 9 18:32:57.827066 waagent[1477]: 2024-02-09T18:32:57.826924Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 18:32:57.843596 waagent[1477]: 2024-02-09T18:32:57.843461Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:32:57.854568 waagent[1477]: 2024-02-09T18:32:57.854489Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:32:57.859925 waagent[1477]: 2024-02-09T18:32:57.859860Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 18:32:57.962493 waagent[1477]: 2024-02-09T18:32:57.962292Z INFO Daemon Daemon Successfully mounted dvd Feb 9 18:32:58.046895 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 18:32:58.075871 waagent[1477]: 2024-02-09T18:32:58.075733Z INFO Daemon Daemon Detect protocol endpoint Feb 9 18:32:58.081223 waagent[1477]: 2024-02-09T18:32:58.081151Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:32:58.087505 waagent[1477]: 2024-02-09T18:32:58.087438Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 9 18:32:58.094467 waagent[1477]: 2024-02-09T18:32:58.094408Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 18:32:58.100301 waagent[1477]: 2024-02-09T18:32:58.100240Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 18:32:58.105761 waagent[1477]: 2024-02-09T18:32:58.105702Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 18:32:58.197768 waagent[1477]: 2024-02-09T18:32:58.197700Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 18:32:58.205961 waagent[1477]: 2024-02-09T18:32:58.205916Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 18:32:58.211617 waagent[1477]: 2024-02-09T18:32:58.211555Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 18:32:58.712816 waagent[1477]: 2024-02-09T18:32:58.712645Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 18:32:58.729089 waagent[1477]: 2024-02-09T18:32:58.729015Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 9 18:32:58.736478 waagent[1477]: 2024-02-09T18:32:58.736390Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 18:32:58.813429 waagent[1477]: 2024-02-09T18:32:58.813303Z INFO Daemon Daemon Found private key matching thumbprint 0D9BF437DDEDD4ADD1D2331CF8A120D853028092 Feb 9 18:32:58.822402 waagent[1477]: 2024-02-09T18:32:58.822313Z INFO Daemon Daemon Certificate with thumbprint 44389390C33E5ADBB9E2B197918B16FAD3636C2F has no matching private key. Feb 9 18:32:58.832701 waagent[1477]: 2024-02-09T18:32:58.832602Z INFO Daemon Daemon Fetch goal state completed Feb 9 18:32:59.547094 waagent[1477]: 2024-02-09T18:32:59.547034Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 8afb68bd-4184-4964-a0f1-becaafdeafed New eTag: 2510886188388803554] Feb 9 18:32:59.559986 waagent[1477]: 2024-02-09T18:32:59.559894Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:32:59.577068 waagent[1477]: 2024-02-09T18:32:59.577005Z INFO Daemon Daemon Starting provisioning Feb 9 18:32:59.583699 waagent[1477]: 2024-02-09T18:32:59.583606Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 18:32:59.590121 waagent[1477]: 2024-02-09T18:32:59.590047Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-37f6c6cc7b] Feb 9 18:32:59.652771 waagent[1477]: 2024-02-09T18:32:59.652618Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-37f6c6cc7b] Feb 9 18:32:59.660320 waagent[1477]: 2024-02-09T18:32:59.660239Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 18:32:59.667788 waagent[1477]: 2024-02-09T18:32:59.667724Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 18:32:59.684395 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 18:32:59.684549 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 18:32:59.684606 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 18:32:59.684849 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 18:32:59.688729 systemd-networkd[1247]: eth0: DHCPv6 lease lost Feb 9 18:32:59.690224 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:32:59.690395 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:32:59.692262 systemd[1]: Starting systemd-networkd.service... Feb 9 18:32:59.719084 systemd-networkd[1533]: enP12173s1: Link UP Feb 9 18:32:59.719094 systemd-networkd[1533]: enP12173s1: Gained carrier Feb 9 18:32:59.719975 systemd-networkd[1533]: eth0: Link UP Feb 9 18:32:59.719985 systemd-networkd[1533]: eth0: Gained carrier Feb 9 18:32:59.720295 systemd-networkd[1533]: lo: Link UP Feb 9 18:32:59.720303 systemd-networkd[1533]: lo: Gained carrier Feb 9 18:32:59.720527 systemd-networkd[1533]: eth0: Gained IPv6LL Feb 9 18:32:59.721010 systemd-networkd[1533]: Enumeration completed Feb 9 18:32:59.721115 systemd[1]: Started systemd-networkd.service. Feb 9 18:32:59.721880 systemd-networkd[1533]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:32:59.722732 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:32:59.726408 waagent[1477]: 2024-02-09T18:32:59.726265Z INFO Daemon Daemon Create user account if not exists Feb 9 18:32:59.734219 waagent[1477]: 2024-02-09T18:32:59.734132Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 18:32:59.740836 waagent[1477]: 2024-02-09T18:32:59.740755Z INFO Daemon Daemon Configure sudoer Feb 9 18:32:59.746500 waagent[1477]: 2024-02-09T18:32:59.746431Z INFO Daemon Daemon Configure sshd Feb 9 18:32:59.751301 waagent[1477]: 2024-02-09T18:32:59.751241Z INFO Daemon Daemon Deploy ssh public key. Feb 9 18:32:59.758796 systemd-networkd[1533]: eth0: DHCPv4 address 10.200.20.32/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:32:59.761333 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 18:33:00.966198 waagent[1477]: 2024-02-09T18:33:00.966112Z INFO Daemon Daemon Provisioning complete Feb 9 18:33:00.988157 waagent[1477]: 2024-02-09T18:33:00.988087Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 18:33:00.994819 waagent[1477]: 2024-02-09T18:33:00.994753Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 18:33:01.006233 waagent[1477]: 2024-02-09T18:33:01.006166Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 18:33:01.301350 waagent[1542]: 2024-02-09T18:33:01.301256Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 18:33:01.302438 waagent[1542]: 2024-02-09T18:33:01.302383Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:33:01.302674 waagent[1542]: 2024-02-09T18:33:01.302628Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:33:01.314953 waagent[1542]: 2024-02-09T18:33:01.314875Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 18:33:01.315275 waagent[1542]: 2024-02-09T18:33:01.315226Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 18:33:01.383966 waagent[1542]: 2024-02-09T18:33:01.383835Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0D9BF437DDEDD4ADD1D2331CF8A120D853028092 Feb 9 18:33:01.384340 waagent[1542]: 2024-02-09T18:33:01.384289Z INFO ExtHandler ExtHandler Certificate with thumbprint 44389390C33E5ADBB9E2B197918B16FAD3636C2F has no matching private key. 
Feb 9 18:33:01.384662 waagent[1542]: 2024-02-09T18:33:01.384614Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 18:33:01.397787 waagent[1542]: 2024-02-09T18:33:01.397732Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 14d20ce3-f9ab-4fd8-a1c9-489798f14dfd New eTag: 2510886188388803554] Feb 9 18:33:01.398563 waagent[1542]: 2024-02-09T18:33:01.398505Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:33:01.472556 waagent[1542]: 2024-02-09T18:33:01.472405Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:33:01.483270 waagent[1542]: 2024-02-09T18:33:01.483176Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1542 Feb 9 18:33:01.487160 waagent[1542]: 2024-02-09T18:33:01.487099Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:33:01.488629 waagent[1542]: 2024-02-09T18:33:01.488574Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:33:01.601591 waagent[1542]: 2024-02-09T18:33:01.601481Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:33:01.602154 waagent[1542]: 2024-02-09T18:33:01.602097Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:33:01.609594 waagent[1542]: 2024-02-09T18:33:01.609543Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 18:33:01.610229 waagent[1542]: 2024-02-09T18:33:01.610175Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:33:01.611434 waagent[1542]: 2024-02-09T18:33:01.611374Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 18:33:01.612885 waagent[1542]: 2024-02-09T18:33:01.612818Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:33:01.613169 waagent[1542]: 2024-02-09T18:33:01.613099Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:33:01.613446 waagent[1542]: 2024-02-09T18:33:01.613384Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:33:01.614341 waagent[1542]: 2024-02-09T18:33:01.614265Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 18:33:01.614651 waagent[1542]: 2024-02-09T18:33:01.614589Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:33:01.614651 waagent[1542]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:33:01.614651 waagent[1542]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:33:01.614651 waagent[1542]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:33:01.614651 waagent[1542]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:33:01.614651 waagent[1542]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:33:01.614651 waagent[1542]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:33:01.616794 waagent[1542]: 2024-02-09T18:33:01.616606Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Feb 9 18:33:01.617149 waagent[1542]: 2024-02-09T18:33:01.617073Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:33:01.617632 waagent[1542]: 2024-02-09T18:33:01.617567Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:33:01.618623 waagent[1542]: 2024-02-09T18:33:01.618548Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:33:01.618811 waagent[1542]: 2024-02-09T18:33:01.618758Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:33:01.618932 waagent[1542]: 2024-02-09T18:33:01.618888Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:33:01.619836 waagent[1542]: 2024-02-09T18:33:01.619773Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:33:01.619990 waagent[1542]: 2024-02-09T18:33:01.619923Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 18:33:01.620723 waagent[1542]: 2024-02-09T18:33:01.620617Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:33:01.620938 waagent[1542]: 2024-02-09T18:33:01.620868Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 18:33:01.621073 waagent[1542]: 2024-02-09T18:33:01.621016Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:33:01.632405 waagent[1542]: 2024-02-09T18:33:01.632338Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 18:33:01.633142 waagent[1542]: 2024-02-09T18:33:01.633096Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:33:01.634130 waagent[1542]: 2024-02-09T18:33:01.634077Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 9 18:33:01.657830 waagent[1542]: 2024-02-09T18:33:01.657696Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1533' Feb 9 18:33:01.695194 waagent[1542]: 2024-02-09T18:33:01.695131Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 9 18:33:01.756212 waagent[1542]: 2024-02-09T18:33:01.756085Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:33:01.756212 waagent[1542]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:33:01.756212 waagent[1542]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:33:01.756212 waagent[1542]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:8a:6c brd ff:ff:ff:ff:ff:ff Feb 9 18:33:01.756212 waagent[1542]: 3: enP12173s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:8a:6c brd ff:ff:ff:ff:ff:ff\ altname enP12173p0s2 Feb 9 18:33:01.756212 waagent[1542]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:33:01.756212 waagent[1542]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:33:01.756212 waagent[1542]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:33:01.756212 waagent[1542]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:33:01.756212 waagent[1542]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:33:01.756212 waagent[1542]: 2: eth0 inet6 fe80::222:48ff:fe7c:8a6c/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:33:01.873067 waagent[1542]: 2024-02-09T18:33:01.872939Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 18:33:02.009699 waagent[1477]: 
2024-02-09T18:33:02.009566Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 18:33:02.013266 waagent[1477]: 2024-02-09T18:33:02.013215Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 18:33:03.122625 waagent[1571]: 2024-02-09T18:33:03.122523Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 18:33:03.123310 waagent[1571]: 2024-02-09T18:33:03.123241Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 18:33:03.123436 waagent[1571]: 2024-02-09T18:33:03.123391Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 18:33:03.131036 waagent[1571]: 2024-02-09T18:33:03.130924Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:33:03.131421 waagent[1571]: 2024-02-09T18:33:03.131365Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:33:03.131566 waagent[1571]: 2024-02-09T18:33:03.131518Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:33:03.143768 waagent[1571]: 2024-02-09T18:33:03.143698Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 18:33:03.152002 waagent[1571]: 2024-02-09T18:33:03.151950Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 18:33:03.152994 waagent[1571]: 2024-02-09T18:33:03.152935Z INFO ExtHandler Feb 9 18:33:03.153142 waagent[1571]: 2024-02-09T18:33:03.153094Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5ecefbd9-c62c-42da-bbe5-556406bfa9f3 eTag: 2510886188388803554 source: Fabric] Feb 9 18:33:03.153887 waagent[1571]: 2024-02-09T18:33:03.153828Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 18:33:03.155105 waagent[1571]: 2024-02-09T18:33:03.155043Z INFO ExtHandler Feb 9 18:33:03.155237 waagent[1571]: 2024-02-09T18:33:03.155191Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 18:33:03.161243 waagent[1571]: 2024-02-09T18:33:03.161196Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 18:33:03.161692 waagent[1571]: 2024-02-09T18:33:03.161636Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:33:03.181430 waagent[1571]: 2024-02-09T18:33:03.181365Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 18:33:03.251137 waagent[1571]: 2024-02-09T18:33:03.250998Z INFO ExtHandler Downloaded certificate {'thumbprint': '44389390C33E5ADBB9E2B197918B16FAD3636C2F', 'hasPrivateKey': False} Feb 9 18:33:03.252203 waagent[1571]: 2024-02-09T18:33:03.252142Z INFO ExtHandler Downloaded certificate {'thumbprint': '0D9BF437DDEDD4ADD1D2331CF8A120D853028092', 'hasPrivateKey': True} Feb 9 18:33:03.253244 waagent[1571]: 2024-02-09T18:33:03.253184Z INFO ExtHandler Fetch goal state completed Feb 9 18:33:03.277686 waagent[1571]: 2024-02-09T18:33:03.277621Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1571 Feb 9 18:33:03.281139 waagent[1571]: 2024-02-09T18:33:03.281077Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:33:03.282590 waagent[1571]: 2024-02-09T18:33:03.282532Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:33:03.287251 waagent[1571]: 2024-02-09T18:33:03.287191Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:33:03.287634 waagent[1571]: 2024-02-09T18:33:03.287574Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:33:03.294928 
waagent[1571]: 2024-02-09T18:33:03.294864Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 18:33:03.295388 waagent[1571]: 2024-02-09T18:33:03.295329Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:33:03.300883 waagent[1571]: 2024-02-09T18:33:03.300760Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 18:33:03.304393 waagent[1571]: 2024-02-09T18:33:03.304331Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 18:33:03.305897 waagent[1571]: 2024-02-09T18:33:03.305822Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:33:03.306797 waagent[1571]: 2024-02-09T18:33:03.306725Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:33:03.307376 waagent[1571]: 2024-02-09T18:33:03.307313Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:33:03.307643 waagent[1571]: 2024-02-09T18:33:03.307592Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:33:03.307917 waagent[1571]: 2024-02-09T18:33:03.307865Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:33:03.308135 waagent[1571]: 2024-02-09T18:33:03.308089Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:33:03.308799 waagent[1571]: 2024-02-09T18:33:03.308740Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 18:33:03.309260 waagent[1571]: 2024-02-09T18:33:03.309206Z INFO EnvHandler ExtHandler Configure routes
Feb 9 18:33:03.309488 waagent[1571]: 2024-02-09T18:33:03.309441Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 18:33:03.309715 waagent[1571]: 2024-02-09T18:33:03.309644Z INFO EnvHandler ExtHandler Routes:None
Feb 9 18:33:03.309835 waagent[1571]: 2024-02-09T18:33:03.309767Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 18:33:03.310168 waagent[1571]: 2024-02-09T18:33:03.310098Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 18:33:03.310769 waagent[1571]: 2024-02-09T18:33:03.310646Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 18:33:03.310839 waagent[1571]: 2024-02-09T18:33:03.310780Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 18:33:03.313005 waagent[1571]: 2024-02-09T18:33:03.312937Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 18:33:03.313005 waagent[1571]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 18:33:03.313005 waagent[1571]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 18:33:03.313005 waagent[1571]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 18:33:03.313005 waagent[1571]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:33:03.313005 waagent[1571]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:33:03.313005 waagent[1571]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 18:33:03.315124 waagent[1571]: 2024-02-09T18:33:03.314965Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 18:33:03.335086 waagent[1571]: 2024-02-09T18:33:03.335006Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 9 18:33:03.335406 waagent[1571]: 2024-02-09T18:33:03.335347Z INFO ExtHandler ExtHandler Downloading manifest
Feb 9 18:33:03.359893 waagent[1571]: 2024-02-09T18:33:03.359813Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 18:33:03.359893 waagent[1571]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 18:33:03.359893 waagent[1571]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 18:33:03.359893 waagent[1571]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:8a:6c brd ff:ff:ff:ff:ff:ff
Feb 9 18:33:03.359893 waagent[1571]: 3: enP12173s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7c:8a:6c brd ff:ff:ff:ff:ff:ff\ altname enP12173p0s2
Feb 9 18:33:03.359893 waagent[1571]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 18:33:03.359893 waagent[1571]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 18:33:03.359893 waagent[1571]: 2: eth0 inet 10.200.20.32/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 18:33:03.359893 waagent[1571]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 18:33:03.359893 waagent[1571]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 18:33:03.359893 waagent[1571]: 2: eth0 inet6 fe80::222:48ff:fe7c:8a6c/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 18:33:03.367750 waagent[1571]: 2024-02-09T18:33:03.367659Z INFO ExtHandler ExtHandler
Feb 9 18:33:03.368626 waagent[1571]: 2024-02-09T18:33:03.368558Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 74d6d6f5-41fe-4016-9569-79b0848277bb correlation c7ab5445-2a8a-48e5-b85e-6af5e79dd6f5 created: 2024-02-09T18:31:11.254654Z]
Feb 9 18:33:03.372138 waagent[1571]: 2024-02-09T18:33:03.372037Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 9 18:33:03.374256 waagent[1571]: 2024-02-09T18:33:03.374171Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms]
Feb 9 18:33:03.394987 waagent[1571]: 2024-02-09T18:33:03.394921Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 9 18:33:03.431831 waagent[1571]: 2024-02-09T18:33:03.431740Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 3AACE7EE-FFF7-4ED9-A651-DBA432D105A3;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 9 18:33:03.596318 waagent[1571]: 2024-02-09T18:33:03.596182Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 9 18:33:03.596318 waagent[1571]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 18:33:03.596318 waagent[1571]: pkts bytes target prot opt in out source destination
Feb 9 18:33:03.596318 waagent[1571]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 18:33:03.596318 waagent[1571]: pkts bytes target prot opt in out source destination
Feb 9 18:33:03.596318 waagent[1571]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 18:33:03.596318 waagent[1571]: pkts bytes target prot opt in out source destination
Feb 9 18:33:03.596318 waagent[1571]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 9 18:33:03.596318 waagent[1571]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 18:33:03.596318 waagent[1571]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 18:33:03.603382 waagent[1571]: 2024-02-09T18:33:03.603258Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 9 18:33:03.603382 waagent[1571]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 18:33:03.603382 waagent[1571]: pkts bytes target prot opt in out source destination
Feb 9 18:33:03.603382 waagent[1571]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 18:33:03.603382 waagent[1571]: pkts bytes target prot opt in out source destination
Feb 9 18:33:03.603382 waagent[1571]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 18:33:03.603382 waagent[1571]: pkts bytes target prot opt in out source destination
Feb 9 18:33:03.603382 waagent[1571]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 9 18:33:03.603382 waagent[1571]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 18:33:03.603382 waagent[1571]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 18:33:03.603902 waagent[1571]: 2024-02-09T18:33:03.603849Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 9 18:33:28.412128 kernel: hv_balloon: Max. dynamic memory size: 4096 MB
Feb 9 18:33:35.984589 update_engine[1369]: I0209 18:33:35.984242 1369 update_attempter.cc:509] Updating boot flags...
Feb 9 18:33:48.196328 systemd[1]: Created slice system-sshd.slice.
Feb 9 18:33:48.197558 systemd[1]: Started sshd@0-10.200.20.32:22-10.200.12.6:54466.service.
Feb 9 18:33:48.845916 sshd[1691]: Accepted publickey for core from 10.200.12.6 port 54466 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:33:48.864036 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:33:48.867637 systemd-logind[1366]: New session 3 of user core.
Feb 9 18:33:48.868429 systemd[1]: Started session-3.scope.
Feb 9 18:33:49.214433 systemd[1]: Started sshd@1-10.200.20.32:22-10.200.12.6:54476.service.
Feb 9 18:33:49.628392 sshd[1696]: Accepted publickey for core from 10.200.12.6 port 54476 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:33:49.629656 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:33:49.633402 systemd-logind[1366]: New session 4 of user core.
Feb 9 18:33:49.633836 systemd[1]: Started session-4.scope.
Feb 9 18:33:49.930349 sshd[1696]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:49.932717 systemd[1]: sshd@1-10.200.20.32:22-10.200.12.6:54476.service: Deactivated successfully. Feb 9 18:33:49.933388 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:33:49.933926 systemd-logind[1366]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:33:49.934846 systemd-logind[1366]: Removed session 4. Feb 9 18:33:49.998669 systemd[1]: Started sshd@2-10.200.20.32:22-10.200.12.6:54480.service. Feb 9 18:33:50.411842 sshd[1702]: Accepted publickey for core from 10.200.12.6 port 54480 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:50.413096 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:50.416837 systemd-logind[1366]: New session 5 of user core. Feb 9 18:33:50.417234 systemd[1]: Started session-5.scope. Feb 9 18:33:50.710989 sshd[1702]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:50.713381 systemd[1]: sshd@2-10.200.20.32:22-10.200.12.6:54480.service: Deactivated successfully. Feb 9 18:33:50.714080 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:33:50.714548 systemd-logind[1366]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:33:50.715200 systemd-logind[1366]: Removed session 5. Feb 9 18:33:50.789046 systemd[1]: Started sshd@3-10.200.20.32:22-10.200.12.6:54484.service. Feb 9 18:33:51.235187 sshd[1708]: Accepted publickey for core from 10.200.12.6 port 54484 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:51.236410 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:51.240136 systemd-logind[1366]: New session 6 of user core. Feb 9 18:33:51.240562 systemd[1]: Started session-6.scope. 
Feb 9 18:33:51.558544 sshd[1708]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:51.561073 systemd[1]: sshd@3-10.200.20.32:22-10.200.12.6:54484.service: Deactivated successfully. Feb 9 18:33:51.561739 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:33:51.562264 systemd-logind[1366]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:33:51.563090 systemd-logind[1366]: Removed session 6. Feb 9 18:33:51.627291 systemd[1]: Started sshd@4-10.200.20.32:22-10.200.12.6:54494.service. Feb 9 18:33:52.040439 sshd[1714]: Accepted publickey for core from 10.200.12.6 port 54494 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:52.041990 sshd[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:52.046022 systemd[1]: Started session-7.scope. Feb 9 18:33:52.046438 systemd-logind[1366]: New session 7 of user core. Feb 9 18:33:52.561345 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:33:52.561545 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:33:53.267820 systemd[1]: Starting docker.service... 
Feb 9 18:33:53.320034 env[1732]: time="2024-02-09T18:33:53.319981074Z" level=info msg="Starting up" Feb 9 18:33:53.321263 env[1732]: time="2024-02-09T18:33:53.321238182Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:33:53.321263 env[1732]: time="2024-02-09T18:33:53.321257787Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:33:53.321366 env[1732]: time="2024-02-09T18:33:53.321278952Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:33:53.321366 env[1732]: time="2024-02-09T18:33:53.321288635Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:33:53.322922 env[1732]: time="2024-02-09T18:33:53.322892948Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:33:53.322922 env[1732]: time="2024-02-09T18:33:53.322917714Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:33:53.323016 env[1732]: time="2024-02-09T18:33:53.322933358Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:33:53.323016 env[1732]: time="2024-02-09T18:33:53.322943040Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:33:53.436042 env[1732]: time="2024-02-09T18:33:53.435997784Z" level=info msg="Loading containers: start." Feb 9 18:33:53.770705 kernel: Initializing XFRM netlink socket Feb 9 18:33:53.796590 env[1732]: time="2024-02-09T18:33:53.796559901Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:33:53.910821 systemd-networkd[1533]: docker0: Link UP Feb 9 18:33:53.930796 env[1732]: time="2024-02-09T18:33:53.930765988Z" level=info msg="Loading containers: done." 
Feb 9 18:33:53.939603 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4203423460-merged.mount: Deactivated successfully. Feb 9 18:33:53.965714 env[1732]: time="2024-02-09T18:33:53.965651777Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:33:53.965871 env[1732]: time="2024-02-09T18:33:53.965859548Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:33:53.966001 env[1732]: time="2024-02-09T18:33:53.965978937Z" level=info msg="Daemon has completed initialization" Feb 9 18:33:53.993275 systemd[1]: Started docker.service. Feb 9 18:33:54.003158 env[1732]: time="2024-02-09T18:33:54.003091175Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:33:54.017845 systemd[1]: Reloading. Feb 9 18:33:54.078266 /usr/lib/systemd/system-generators/torcx-generator[1862]: time="2024-02-09T18:33:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:33:54.078298 /usr/lib/systemd/system-generators/torcx-generator[1862]: time="2024-02-09T18:33:54Z" level=info msg="torcx already run" Feb 9 18:33:54.151100 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:33:54.151289 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:33:54.168181 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 9 18:33:54.244215 systemd[1]: Started kubelet.service. Feb 9 18:33:54.317664 kubelet[1921]: E0209 18:33:54.317234 1921 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:33:54.320089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:54.320208 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:58.913052 env[1380]: time="2024-02-09T18:33:58.913016730Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:33:59.563551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1797344366.mount: Deactivated successfully. Feb 9 18:34:01.157525 env[1380]: time="2024-02-09T18:34:01.157478356Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:01.162385 env[1380]: time="2024-02-09T18:34:01.162351437Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:01.165462 env[1380]: time="2024-02-09T18:34:01.165423162Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:01.169019 env[1380]: time="2024-02-09T18:34:01.168982944Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:01.169862 env[1380]: time="2024-02-09T18:34:01.169836472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference 
\"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 18:34:01.179007 env[1380]: time="2024-02-09T18:34:01.178970073Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:34:02.905928 env[1380]: time="2024-02-09T18:34:02.905871512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:02.913726 env[1380]: time="2024-02-09T18:34:02.913665448Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:02.918450 env[1380]: time="2024-02-09T18:34:02.918421641Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:02.924012 env[1380]: time="2024-02-09T18:34:02.923974587Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:02.925353 env[1380]: time="2024-02-09T18:34:02.925312484Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 18:34:02.936670 env[1380]: time="2024-02-09T18:34:02.936631336Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:34:04.285618 env[1380]: time="2024-02-09T18:34:04.285549824Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:04.292391 
env[1380]: time="2024-02-09T18:34:04.292347261Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:04.299099 env[1380]: time="2024-02-09T18:34:04.299063964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:04.304055 env[1380]: time="2024-02-09T18:34:04.304023867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:04.304727 env[1380]: time="2024-02-09T18:34:04.304699030Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 18:34:04.313029 env[1380]: time="2024-02-09T18:34:04.312991100Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:34:04.469492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:34:04.469663 systemd[1]: Stopped kubelet.service. Feb 9 18:34:04.471065 systemd[1]: Started kubelet.service. Feb 9 18:34:04.512504 kubelet[1955]: E0209 18:34:04.512438 1955 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:34:04.515428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:34:04.515552 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:34:05.473484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141580706.mount: Deactivated successfully. 
Feb 9 18:34:06.194617 env[1380]: time="2024-02-09T18:34:06.194572298Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.199999 env[1380]: time="2024-02-09T18:34:06.199971111Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.205597 env[1380]: time="2024-02-09T18:34:06.205558756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.208627 env[1380]: time="2024-02-09T18:34:06.208602802Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.209147 env[1380]: time="2024-02-09T18:34:06.209122892Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:34:06.217219 env[1380]: time="2024-02-09T18:34:06.217187885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:34:06.798814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1882187080.mount: Deactivated successfully. 
Feb 9 18:34:06.825778 env[1380]: time="2024-02-09T18:34:06.825724552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.833240 env[1380]: time="2024-02-09T18:34:06.833200444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.837175 env[1380]: time="2024-02-09T18:34:06.837149806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.842760 env[1380]: time="2024-02-09T18:34:06.842720849Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:06.843404 env[1380]: time="2024-02-09T18:34:06.843378403Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:34:06.853126 env[1380]: time="2024-02-09T18:34:06.853086240Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:34:07.679437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907800574.mount: Deactivated successfully. 
Feb 9 18:34:10.002287 env[1380]: time="2024-02-09T18:34:10.002224867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:10.012806 env[1380]: time="2024-02-09T18:34:10.012745068Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:10.017702 env[1380]: time="2024-02-09T18:34:10.017647352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:10.021753 env[1380]: time="2024-02-09T18:34:10.021728228Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:10.022434 env[1380]: time="2024-02-09T18:34:10.022407614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:34:10.030781 env[1380]: time="2024-02-09T18:34:10.030733272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:34:10.730519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount469634358.mount: Deactivated successfully. 
Feb 9 18:34:12.413761 env[1380]: time="2024-02-09T18:34:12.413717597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:12.419510 env[1380]: time="2024-02-09T18:34:12.419481611Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:12.425518 env[1380]: time="2024-02-09T18:34:12.425480861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:12.428519 env[1380]: time="2024-02-09T18:34:12.428477385Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:12.429082 env[1380]: time="2024-02-09T18:34:12.429054550Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:34:14.719480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 18:34:14.719657 systemd[1]: Stopped kubelet.service. Feb 9 18:34:14.721050 systemd[1]: Started kubelet.service. Feb 9 18:34:14.776671 kubelet[2030]: E0209 18:34:14.776624 2030 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:34:14.779550 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:34:14.779664 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 18:34:17.763692 systemd[1]: Stopped kubelet.service. Feb 9 18:34:17.777722 systemd[1]: Reloading. Feb 9 18:34:17.857670 /usr/lib/systemd/system-generators/torcx-generator[2059]: time="2024-02-09T18:34:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:17.857733 /usr/lib/systemd/system-generators/torcx-generator[2059]: time="2024-02-09T18:34:17Z" level=info msg="torcx already run" Feb 9 18:34:17.928974 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:17.928992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:17.945832 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:18.042720 systemd[1]: Started kubelet.service. Feb 9 18:34:18.092338 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:18.092338 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 18:34:18.092648 kubelet[2119]: I0209 18:34:18.092391 2119 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:18.093576 kubelet[2119]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:18.093576 kubelet[2119]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:18.902485 kubelet[2119]: I0209 18:34:18.902458 2119 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:34:18.902651 kubelet[2119]: I0209 18:34:18.902640 2119 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:18.902949 kubelet[2119]: I0209 18:34:18.902935 2119 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:34:18.906389 kubelet[2119]: E0209 18:34:18.906364 2119 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.906479 kubelet[2119]: I0209 18:34:18.906428 2119 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:18.907628 kubelet[2119]: W0209 18:34:18.907613 2119 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:18.908253 kubelet[2119]: I0209 18:34:18.908239 2119 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:18.908551 kubelet[2119]: I0209 18:34:18.908538 2119 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:18.908700 kubelet[2119]: I0209 18:34:18.908665 2119 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:34:18.908830 kubelet[2119]: I0209 18:34:18.908818 2119 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:34:18.908895 kubelet[2119]: I0209 18:34:18.908887 2119 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:34:18.909031 kubelet[2119]: I0209 18:34:18.909020 2119 state_mem.go:36] "Initialized new 
in-memory state store" Feb 9 18:34:18.911564 kubelet[2119]: I0209 18:34:18.911540 2119 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:34:18.911564 kubelet[2119]: I0209 18:34:18.911567 2119 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:18.911674 kubelet[2119]: I0209 18:34:18.911593 2119 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:34:18.911674 kubelet[2119]: I0209 18:34:18.911605 2119 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:18.912988 kubelet[2119]: I0209 18:34:18.912971 2119 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:34:18.913317 kubelet[2119]: W0209 18:34:18.913299 2119 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:34:18.913730 kubelet[2119]: I0209 18:34:18.913714 2119 server.go:1186] "Started kubelet" Feb 9 18:34:18.913926 kubelet[2119]: W0209 18:34:18.913894 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37f6c6cc7b&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.914030 kubelet[2119]: E0209 18:34:18.914018 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37f6c6cc7b&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.923254 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 9 18:34:18.923346 kubelet[2119]: W0209 18:34:18.915937 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.923346 kubelet[2119]: E0209 18:34:18.915969 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.923346 kubelet[2119]: E0209 18:34:18.916001 2119 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587f23a9871", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 18, 913667185, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 18, 913667185, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post 
"https://10.200.20.32:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.32:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:34:18.923522 kubelet[2119]: I0209 18:34:18.916366 2119 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:18.923522 kubelet[2119]: I0209 18:34:18.916861 2119 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:34:18.924301 kubelet[2119]: I0209 18:34:18.924271 2119 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:18.924522 kubelet[2119]: E0209 18:34:18.924509 2119 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:18.924600 kubelet[2119]: E0209 18:34:18.924591 2119 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:18.928666 kubelet[2119]: I0209 18:34:18.928640 2119 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:34:18.928843 kubelet[2119]: I0209 18:34:18.928831 2119 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:18.929218 kubelet[2119]: W0209 18:34:18.929187 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.929315 kubelet[2119]: E0209 18:34:18.929305 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:18.929561 kubelet[2119]: E0209 18:34:18.929542 2119 
controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37f6c6cc7b?timeout=10s": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.061832 kubelet[2119]: I0209 18:34:19.061810 2119 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.062289 kubelet[2119]: I0209 18:34:19.062277 2119 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:19.062382 kubelet[2119]: I0209 18:34:19.062373 2119 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:19.062447 kubelet[2119]: I0209 18:34:19.062438 2119 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:19.062913 kubelet[2119]: E0209 18:34:19.062899 2119 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.088074 kubelet[2119]: I0209 18:34:19.088049 2119 policy_none.go:49] "None policy: Start" Feb 9 18:34:19.088859 kubelet[2119]: I0209 18:34:19.088845 2119 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:19.088966 kubelet[2119]: I0209 18:34:19.088956 2119 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:19.094034 kubelet[2119]: I0209 18:34:19.094017 2119 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:34:19.130089 kubelet[2119]: E0209 18:34:19.130049 2119 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37f6c6cc7b?timeout=10s": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.164802 systemd[1]: Created slice kubepods.slice. 
Feb 9 18:34:19.169394 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 18:34:19.171656 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 18:34:19.182313 kubelet[2119]: I0209 18:34:19.182292 2119 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:19.182823 kubelet[2119]: I0209 18:34:19.182810 2119 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:19.183955 kubelet[2119]: E0209 18:34:19.183866 2119 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-37f6c6cc7b\" not found" Feb 9 18:34:19.207888 kubelet[2119]: I0209 18:34:19.207865 2119 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:34:19.208048 kubelet[2119]: I0209 18:34:19.208037 2119 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:34:19.208119 kubelet[2119]: I0209 18:34:19.208109 2119 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:34:19.208209 kubelet[2119]: E0209 18:34:19.208201 2119 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:34:19.208760 kubelet[2119]: W0209 18:34:19.208741 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.208900 kubelet[2119]: E0209 18:34:19.208890 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.264903 kubelet[2119]: I0209 18:34:19.264880 2119 kubelet_node_status.go:70] "Attempting to 
register node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.265382 kubelet[2119]: E0209 18:34:19.265356 2119 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.308460 kubelet[2119]: I0209 18:34:19.308438 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:19.309964 kubelet[2119]: I0209 18:34:19.309945 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:19.311531 kubelet[2119]: I0209 18:34:19.311518 2119 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:19.312123 kubelet[2119]: I0209 18:34:19.311942 2119 status_manager.go:698] "Failed to get status for pod" podUID=3916357017b776a46b2372ac85aad7af pod="kube-system/kube-scheduler-ci-3510.3.2-a-37f6c6cc7b" err="Get \"https://10.200.20.32:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-37f6c6cc7b\": dial tcp 10.200.20.32:6443: connect: connection refused" Feb 9 18:34:19.314519 kubelet[2119]: I0209 18:34:19.314468 2119 status_manager.go:698] "Failed to get status for pod" podUID=3b88b5e37fc5df321a745b9e80ad9960 pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" err="Get \"https://10.200.20.32:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\": dial tcp 10.200.20.32:6443: connect: connection refused" Feb 9 18:34:19.315787 systemd[1]: Created slice kubepods-burstable-pod3916357017b776a46b2372ac85aad7af.slice. 
Feb 9 18:34:19.318372 kubelet[2119]: I0209 18:34:19.318355 2119 status_manager.go:698] "Failed to get status for pod" podUID=853da4c4b85736086cad0b395d7c33f6 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" err="Get \"https://10.200.20.32:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\": dial tcp 10.200.20.32:6443: connect: connection refused" Feb 9 18:34:19.327687 systemd[1]: Created slice kubepods-burstable-pod3b88b5e37fc5df321a745b9e80ad9960.slice. Feb 9 18:34:19.331517 kubelet[2119]: I0209 18:34:19.331500 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.331669 systemd[1]: Created slice kubepods-burstable-pod853da4c4b85736086cad0b395d7c33f6.slice. 
Feb 9 18:34:19.332046 kubelet[2119]: I0209 18:34:19.332031 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332387 kubelet[2119]: I0209 18:34:19.332373 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3916357017b776a46b2372ac85aad7af-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3916357017b776a46b2372ac85aad7af\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332500 kubelet[2119]: I0209 18:34:19.332490 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332590 kubelet[2119]: I0209 18:34:19.332581 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b88b5e37fc5df321a745b9e80ad9960-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3b88b5e37fc5df321a745b9e80ad9960\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332722 kubelet[2119]: I0209 18:34:19.332712 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332821 kubelet[2119]: I0209 18:34:19.332812 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332909 kubelet[2119]: I0209 18:34:19.332899 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b88b5e37fc5df321a745b9e80ad9960-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3b88b5e37fc5df321a745b9e80ad9960\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.332997 kubelet[2119]: I0209 18:34:19.332987 2119 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b88b5e37fc5df321a745b9e80ad9960-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3b88b5e37fc5df321a745b9e80ad9960\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.530743 kubelet[2119]: E0209 18:34:19.530704 2119 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37f6c6cc7b?timeout=10s": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.630106 env[1380]: time="2024-02-09T18:34:19.629796221Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-37f6c6cc7b,Uid:3916357017b776a46b2372ac85aad7af,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:19.631124 env[1380]: time="2024-02-09T18:34:19.630998331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-37f6c6cc7b,Uid:3b88b5e37fc5df321a745b9e80ad9960,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:19.634982 env[1380]: time="2024-02-09T18:34:19.634840251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b,Uid:853da4c4b85736086cad0b395d7c33f6,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:19.667524 kubelet[2119]: I0209 18:34:19.667496 2119 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.667842 kubelet[2119]: E0209 18:34:19.667824 2119 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:19.832289 kubelet[2119]: W0209 18:34:19.832157 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37f6c6cc7b&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.832289 kubelet[2119]: E0209 18:34:19.832217 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.32:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-37f6c6cc7b&limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.977808 kubelet[2119]: W0209 18:34:19.977770 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get 
"https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:19.977808 kubelet[2119]: E0209 18:34:19.977811 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:20.331981 kubelet[2119]: E0209 18:34:20.331942 2119 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37f6c6cc7b?timeout=10s": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:20.370524 kubelet[2119]: W0209 18:34:20.370468 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:20.370524 kubelet[2119]: E0209 18:34:20.370527 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.32:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:20.469395 kubelet[2119]: I0209 18:34:20.469365 2119 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:20.469701 kubelet[2119]: E0209 18:34:20.469628 2119 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.32:6443/api/v1/nodes\": dial tcp 10.200.20.32:6443: connect: connection refused" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:20.509452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1341665591.mount: Deactivated successfully. 
Feb 9 18:34:20.531365 env[1380]: time="2024-02-09T18:34:20.531312156Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.551244 env[1380]: time="2024-02-09T18:34:20.551201543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.559199 env[1380]: time="2024-02-09T18:34:20.559158514Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.562815 env[1380]: time="2024-02-09T18:34:20.562783596Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.571569 env[1380]: time="2024-02-09T18:34:20.571522302Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.576341 env[1380]: time="2024-02-09T18:34:20.576301886Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.579634 env[1380]: time="2024-02-09T18:34:20.579598408Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.584875 env[1380]: time="2024-02-09T18:34:20.584793162Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.588270 env[1380]: time="2024-02-09T18:34:20.588237462Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.592682 env[1380]: time="2024-02-09T18:34:20.592654401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.598966 env[1380]: time="2024-02-09T18:34:20.598938128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.601501 env[1380]: time="2024-02-09T18:34:20.601472237Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.674900 env[1380]: time="2024-02-09T18:34:20.669190421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:20.674900 env[1380]: time="2024-02-09T18:34:20.669227145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:20.674900 env[1380]: time="2024-02-09T18:34:20.669236946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:20.674900 env[1380]: time="2024-02-09T18:34:20.669358321Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9 pid=2196 runtime=io.containerd.runc.v2 Feb 9 18:34:20.688033 env[1380]: time="2024-02-09T18:34:20.687949790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:20.688033 env[1380]: time="2024-02-09T18:34:20.687998276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:20.688982 env[1380]: time="2024-02-09T18:34:20.688009037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:20.690091 env[1380]: time="2024-02-09T18:34:20.689952154Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9 pid=2214 runtime=io.containerd.runc.v2 Feb 9 18:34:20.696984 systemd[1]: Started cri-containerd-bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9.scope. Feb 9 18:34:20.706185 env[1380]: time="2024-02-09T18:34:20.705057157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:20.706185 env[1380]: time="2024-02-09T18:34:20.705136527Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:20.706185 env[1380]: time="2024-02-09T18:34:20.705161290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:20.706185 env[1380]: time="2024-02-09T18:34:20.705425562Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/49144d640bc89a2f4deb667f14bd87d490a32d1e98be673e22cb57ada25a0c43 pid=2242 runtime=io.containerd.runc.v2 Feb 9 18:34:20.722255 systemd[1]: Started cri-containerd-39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9.scope. Feb 9 18:34:20.730442 systemd[1]: Started cri-containerd-49144d640bc89a2f4deb667f14bd87d490a32d1e98be673e22cb57ada25a0c43.scope. Feb 9 18:34:20.764493 env[1380]: time="2024-02-09T18:34:20.764450965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-37f6c6cc7b,Uid:3916357017b776a46b2372ac85aad7af,Namespace:kube-system,Attempt:0,} returns sandbox id \"bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9\"" Feb 9 18:34:20.769916 env[1380]: time="2024-02-09T18:34:20.769880587Z" level=info msg="CreateContainer within sandbox \"bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:34:20.774585 env[1380]: time="2024-02-09T18:34:20.774544477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-37f6c6cc7b,Uid:3b88b5e37fc5df321a745b9e80ad9960,Namespace:kube-system,Attempt:0,} returns sandbox id \"49144d640bc89a2f4deb667f14bd87d490a32d1e98be673e22cb57ada25a0c43\"" Feb 9 18:34:20.777985 env[1380]: time="2024-02-09T18:34:20.777952892Z" level=info msg="CreateContainer within sandbox \"49144d640bc89a2f4deb667f14bd87d490a32d1e98be673e22cb57ada25a0c43\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:34:20.790616 env[1380]: time="2024-02-09T18:34:20.790579473Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b,Uid:853da4c4b85736086cad0b395d7c33f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9\"" Feb 9 18:34:20.793249 env[1380]: time="2024-02-09T18:34:20.793215075Z" level=info msg="CreateContainer within sandbox \"39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:34:20.800490 kubelet[2119]: W0209 18:34:20.800424 2119 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:20.800490 kubelet[2119]: E0209 18:34:20.800469 2119 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:20.840515 env[1380]: time="2024-02-09T18:34:20.840369949Z" level=info msg="CreateContainer within sandbox \"bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d\"" Feb 9 18:34:20.841740 env[1380]: time="2024-02-09T18:34:20.841712313Z" level=info msg="StartContainer for \"d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d\"" Feb 9 18:34:20.858110 env[1380]: time="2024-02-09T18:34:20.858067709Z" level=info msg="CreateContainer within sandbox \"49144d640bc89a2f4deb667f14bd87d490a32d1e98be673e22cb57ada25a0c43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"fbd12548d0d965f3da253d44d5b3c7774929c82c979988ff563af667c21e69ec\"" Feb 9 18:34:20.859571 env[1380]: time="2024-02-09T18:34:20.859150921Z" level=info msg="StartContainer for \"fbd12548d0d965f3da253d44d5b3c7774929c82c979988ff563af667c21e69ec\"" Feb 9 18:34:20.861014 env[1380]: time="2024-02-09T18:34:20.860984345Z" level=info msg="CreateContainer within sandbox \"39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2\"" Feb 9 18:34:20.861645 env[1380]: time="2024-02-09T18:34:20.861616742Z" level=info msg="StartContainer for \"eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2\"" Feb 9 18:34:20.861882 systemd[1]: Started cri-containerd-d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d.scope. Feb 9 18:34:20.884697 systemd[1]: Started cri-containerd-fbd12548d0d965f3da253d44d5b3c7774929c82c979988ff563af667c21e69ec.scope. Feb 9 18:34:20.903503 systemd[1]: Started cri-containerd-eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2.scope. 
Feb 9 18:34:20.928480 env[1380]: time="2024-02-09T18:34:20.928422294Z" level=info msg="StartContainer for \"d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d\" returns successfully" Feb 9 18:34:20.957132 env[1380]: time="2024-02-09T18:34:20.957092952Z" level=info msg="StartContainer for \"eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2\" returns successfully" Feb 9 18:34:20.968266 env[1380]: time="2024-02-09T18:34:20.968212549Z" level=info msg="StartContainer for \"fbd12548d0d965f3da253d44d5b3c7774929c82c979988ff563af667c21e69ec\" returns successfully" Feb 9 18:34:21.082280 kubelet[2119]: E0209 18:34:21.082212 2119 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.32:6443: connect: connection refused Feb 9 18:34:22.071622 kubelet[2119]: I0209 18:34:22.071598 2119 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:24.442025 kubelet[2119]: E0209 18:34:24.441985 2119 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-37f6c6cc7b\" not found" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:24.460071 kubelet[2119]: I0209 18:34:24.460035 2119 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:24.497548 kubelet[2119]: E0209 18:34:24.497434 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587f23a9871", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 18, 913667185, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 18, 913667185, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:34:24.553348 kubelet[2119]: E0209 18:34:24.553235 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587f2e11ebd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 18, 924580541, time.Local), LastTimestamp:time.Date(2024, 
time.February, 9, 18, 34, 18, 924580541, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:34:24.607507 kubelet[2119]: E0209 18:34:24.607409 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0be70f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61602063, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61602063, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.660843 kubelet[2119]: E0209 18:34:24.660738 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0bfb60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61607264, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61607264, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.714501 kubelet[2119]: E0209 18:34:24.714333 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0c0790", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61610384, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61610384, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.769870 kubelet[2119]: E0209 18:34:24.769749 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0be70f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61602063, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61750922, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.825549 kubelet[2119]: E0209 18:34:24.825445 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0bfb60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61607264, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61759443, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.881311 kubelet[2119]: E0209 18:34:24.881198 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0c0790", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61610384, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61764924, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.919453 kubelet[2119]: I0209 18:34:24.919408 2119 apiserver.go:52] "Watching apiserver" Feb 9 18:34:24.929260 kubelet[2119]: I0209 18:34:24.929223 2119 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:34:24.936056 kubelet[2119]: E0209 18:34:24.935972 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24588025f3c0c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 184503820, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 184503820, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:24.966324 kubelet[2119]: I0209 18:34:24.966211 2119 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:34:25.152727 kubelet[2119]: E0209 18:34:25.152600 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0be70f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61602063, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 264831337, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 9 18:34:25.551020 kubelet[2119]: E0209 18:34:25.550920 2119 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-37f6c6cc7b.17b24587fb0bfb60", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-37f6c6cc7b", UID:"ci-3510.3.2-a-37f6c6cc7b", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-37f6c6cc7b status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 61607264, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 34, 19, 264840018, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 9 18:34:27.141132 systemd[1]: Reloading. 
Feb 9 18:34:27.275933 /usr/lib/systemd/system-generators/torcx-generator[2444]: time="2024-02-09T18:34:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:27.276323 /usr/lib/systemd/system-generators/torcx-generator[2444]: time="2024-02-09T18:34:27Z" level=info msg="torcx already run" Feb 9 18:34:27.368465 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:27.368658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:27.386767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:27.496577 kubelet[2119]: I0209 18:34:27.496542 2119 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:27.496998 systemd[1]: Stopping kubelet.service... Feb 9 18:34:27.516186 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 18:34:27.516383 systemd[1]: Stopped kubelet.service. Feb 9 18:34:27.516433 systemd[1]: kubelet.service: Consumed 1.189s CPU time. Feb 9 18:34:27.518141 systemd[1]: Started kubelet.service. Feb 9 18:34:27.573293 kubelet[2503]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:27.574037 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:27.574228 kubelet[2503]: I0209 18:34:27.574195 2503 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:27.575750 kubelet[2503]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:27.575879 kubelet[2503]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:27.579674 kubelet[2503]: I0209 18:34:27.579642 2503 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:34:27.579674 kubelet[2503]: I0209 18:34:27.579671 2503 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:27.579883 kubelet[2503]: I0209 18:34:27.579864 2503 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:34:27.581423 kubelet[2503]: I0209 18:34:27.581112 2503 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:34:27.585492 kubelet[2503]: W0209 18:34:27.585147 2503 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:27.585697 kubelet[2503]: I0209 18:34:27.585658 2503 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:27.585853 kubelet[2503]: I0209 18:34:27.585659 2503 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:27.585977 kubelet[2503]: I0209 18:34:27.585882 2503 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:27.586172 kubelet[2503]: I0209 18:34:27.586126 2503 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:34:27.586747 kubelet[2503]: I0209 18:34:27.586728 2503 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:34:27.586835 kubelet[2503]: I0209 18:34:27.586826 2503 
container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:34:27.586926 kubelet[2503]: I0209 18:34:27.586916 2503 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:27.595454 kubelet[2503]: I0209 18:34:27.595377 2503 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:34:27.595750 kubelet[2503]: I0209 18:34:27.595718 2503 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:27.595792 kubelet[2503]: I0209 18:34:27.595780 2503 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:34:27.595824 kubelet[2503]: I0209 18:34:27.595793 2503 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:27.607079 kubelet[2503]: I0209 18:34:27.607047 2503 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:34:27.607455 kubelet[2503]: I0209 18:34:27.607430 2503 server.go:1186] "Started kubelet" Feb 9 18:34:27.608972 kubelet[2503]: I0209 18:34:27.608951 2503 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:27.620590 kubelet[2503]: I0209 18:34:27.616994 2503 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:27.620590 kubelet[2503]: I0209 18:34:27.617675 2503 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:34:27.620590 kubelet[2503]: I0209 18:34:27.620169 2503 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:34:27.621136 kubelet[2503]: E0209 18:34:27.621117 2503 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:27.621226 kubelet[2503]: E0209 18:34:27.621215 2503 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:27.622167 kubelet[2503]: I0209 18:34:27.622148 2503 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:27.656237 kubelet[2503]: I0209 18:34:27.656202 2503 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:34:27.672405 kubelet[2503]: I0209 18:34:27.672384 2503 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 18:34:27.672573 kubelet[2503]: I0209 18:34:27.672561 2503 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:34:27.672641 kubelet[2503]: I0209 18:34:27.672631 2503 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:34:27.672806 kubelet[2503]: E0209 18:34:27.672793 2503 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:34:27.706969 kubelet[2503]: I0209 18:34:27.706944 2503 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:27.707130 kubelet[2503]: I0209 18:34:27.707117 2503 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:27.707206 kubelet[2503]: I0209 18:34:27.707195 2503 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:27.707414 kubelet[2503]: I0209 18:34:27.707399 2503 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:34:27.707492 kubelet[2503]: I0209 18:34:27.707481 2503 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:34:27.707555 kubelet[2503]: I0209 18:34:27.707545 2503 policy_none.go:49] "None policy: Start" Feb 9 18:34:27.708190 kubelet[2503]: I0209 18:34:27.708174 2503 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:27.708302 kubelet[2503]: I0209 18:34:27.708290 2503 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:27.708491 kubelet[2503]: I0209 
18:34:27.708477 2503 state_mem.go:75] "Updated machine memory state" Feb 9 18:34:27.711876 kubelet[2503]: I0209 18:34:27.711859 2503 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:27.712173 kubelet[2503]: I0209 18:34:27.712157 2503 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:27.724529 kubelet[2503]: I0209 18:34:27.724500 2503 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.736067 kubelet[2503]: I0209 18:34:27.735864 2503 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.736067 kubelet[2503]: I0209 18:34:27.735990 2503 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.774652 kubelet[2503]: I0209 18:34:27.773197 2503 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:27.774652 kubelet[2503]: I0209 18:34:27.773284 2503 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:27.774652 kubelet[2503]: I0209 18:34:27.773315 2503 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:27.781240 kubelet[2503]: E0209 18:34:27.781217 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.808580 sudo[2555]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 18:34:27.808822 sudo[2555]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 18:34:27.923150 kubelet[2503]: I0209 18:34:27.923106 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3916357017b776a46b2372ac85aad7af-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-37f6c6cc7b\" (UID: 
\"3916357017b776a46b2372ac85aad7af\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.923367 kubelet[2503]: I0209 18:34:27.923356 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.923501 kubelet[2503]: I0209 18:34:27.923490 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.923617 kubelet[2503]: I0209 18:34:27.923607 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.923793 kubelet[2503]: I0209 18:34:27.923782 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.923903 kubelet[2503]: I0209 18:34:27.923893 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b88b5e37fc5df321a745b9e80ad9960-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3b88b5e37fc5df321a745b9e80ad9960\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.924023 kubelet[2503]: I0209 18:34:27.924013 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b88b5e37fc5df321a745b9e80ad9960-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3b88b5e37fc5df321a745b9e80ad9960\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.924143 kubelet[2503]: I0209 18:34:27.924132 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b88b5e37fc5df321a745b9e80ad9960-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"3b88b5e37fc5df321a745b9e80ad9960\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:27.924264 kubelet[2503]: I0209 18:34:27.924254 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/853da4c4b85736086cad0b395d7c33f6-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" (UID: \"853da4c4b85736086cad0b395d7c33f6\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:28.291876 sudo[2555]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:28.607395 kubelet[2503]: I0209 18:34:28.607286 2503 apiserver.go:52] "Watching apiserver" Feb 9 18:34:28.822792 kubelet[2503]: I0209 18:34:28.822763 2503 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:34:28.830134 kubelet[2503]: I0209 18:34:28.830113 2503 reconciler.go:41] "Reconciler: start to 
sync state" Feb 9 18:34:29.006052 kubelet[2503]: E0209 18:34:29.006024 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:29.405780 kubelet[2503]: E0209 18:34:29.405664 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-37f6c6cc7b\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:29.613405 kubelet[2503]: E0209 18:34:29.613377 2503 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" Feb 9 18:34:29.776198 sudo[1717]: pam_unix(sudo:session): session closed for user root Feb 9 18:34:29.858461 sshd[1714]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:29.861248 systemd-logind[1366]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:34:29.861419 systemd[1]: sshd@4-10.200.20.32:22-10.200.12.6:54494.service: Deactivated successfully. Feb 9 18:34:29.862157 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:34:29.862342 systemd[1]: session-7.scope: Consumed 6.764s CPU time. Feb 9 18:34:29.862945 systemd-logind[1366]: Removed session 7. 
Feb 9 18:34:30.204262 kubelet[2503]: I0209 18:34:30.204164 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-37f6c6cc7b" podStartSLOduration=3.204115733 pod.CreationTimestamp="2024-02-09 18:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:29.808722111 +0000 UTC m=+2.283579424" watchObservedRunningTime="2024-02-09 18:34:30.204115733 +0000 UTC m=+2.678973046"
Feb 9 18:34:30.204850 kubelet[2503]: I0209 18:34:30.204833 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-37f6c6cc7b" podStartSLOduration=3.204806961 pod.CreationTimestamp="2024-02-09 18:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:30.2048 +0000 UTC m=+2.679657313" watchObservedRunningTime="2024-02-09 18:34:30.204806961 +0000 UTC m=+2.679664274"
Feb 9 18:34:35.328656 kubelet[2503]: I0209 18:34:35.328622 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-37f6c6cc7b" podStartSLOduration=10.328571487 pod.CreationTimestamp="2024-02-09 18:34:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:30.605536253 +0000 UTC m=+3.080393566" watchObservedRunningTime="2024-02-09 18:34:35.328571487 +0000 UTC m=+7.803428800"
Feb 9 18:34:40.146661 kubelet[2503]: I0209 18:34:40.146632 2503 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 18:34:40.147593 env[1380]: time="2024-02-09T18:34:40.147549691Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 18:34:40.147977 kubelet[2503]: I0209 18:34:40.147959 2503 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 18:34:40.920561 kubelet[2503]: I0209 18:34:40.920516 2503 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:34:40.925436 systemd[1]: Created slice kubepods-besteffort-pod50e28b8d_b680_44e1_8341_d05372373a8e.slice.
Feb 9 18:34:40.957517 kubelet[2503]: I0209 18:34:40.957483 2503 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:34:40.962353 systemd[1]: Created slice kubepods-burstable-podb2af5724_0b1c_472b_97c2_e6ec80acb58e.slice.
Feb 9 18:34:40.991609 kubelet[2503]: I0209 18:34:40.991558 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-lib-modules\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991609 kubelet[2503]: I0209 18:34:40.991613 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-kernel\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991800 kubelet[2503]: I0209 18:34:40.991647 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tngv\" (UniqueName: \"kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-kube-api-access-2tngv\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991800 kubelet[2503]: I0209 18:34:40.991668 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50e28b8d-b680-44e1-8341-d05372373a8e-lib-modules\") pod \"kube-proxy-wq2hv\" (UID: \"50e28b8d-b680-44e1-8341-d05372373a8e\") " pod="kube-system/kube-proxy-wq2hv"
Feb 9 18:34:40.991800 kubelet[2503]: I0209 18:34:40.991704 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hostproc\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991800 kubelet[2503]: I0209 18:34:40.991724 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cni-path\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991800 kubelet[2503]: I0209 18:34:40.991744 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-run\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991800 kubelet[2503]: I0209 18:34:40.991776 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-cgroup\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991992 kubelet[2503]: I0209 18:34:40.991796 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-etc-cni-netd\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991992 kubelet[2503]: I0209 18:34:40.991816 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hubble-tls\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991992 kubelet[2503]: I0209 18:34:40.991849 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/50e28b8d-b680-44e1-8341-d05372373a8e-kube-proxy\") pod \"kube-proxy-wq2hv\" (UID: \"50e28b8d-b680-44e1-8341-d05372373a8e\") " pod="kube-system/kube-proxy-wq2hv"
Feb 9 18:34:40.991992 kubelet[2503]: I0209 18:34:40.991883 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-config-path\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991992 kubelet[2503]: I0209 18:34:40.991904 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-bpf-maps\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.991992 kubelet[2503]: I0209 18:34:40.991938 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-xtables-lock\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.992141 kubelet[2503]: I0209 18:34:40.991963 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-net\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.992141 kubelet[2503]: I0209 18:34:40.991983 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvb65\" (UniqueName: \"kubernetes.io/projected/50e28b8d-b680-44e1-8341-d05372373a8e-kube-api-access-lvb65\") pod \"kube-proxy-wq2hv\" (UID: \"50e28b8d-b680-44e1-8341-d05372373a8e\") " pod="kube-system/kube-proxy-wq2hv"
Feb 9 18:34:40.992141 kubelet[2503]: I0209 18:34:40.992014 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2af5724-0b1c-472b-97c2-e6ec80acb58e-clustermesh-secrets\") pod \"cilium-f79m5\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " pod="kube-system/cilium-f79m5"
Feb 9 18:34:40.992141 kubelet[2503]: I0209 18:34:40.992036 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50e28b8d-b680-44e1-8341-d05372373a8e-xtables-lock\") pod \"kube-proxy-wq2hv\" (UID: \"50e28b8d-b680-44e1-8341-d05372373a8e\") " pod="kube-system/kube-proxy-wq2hv"
Feb 9 18:34:41.079654 kubelet[2503]: I0209 18:34:41.079605 2503 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:34:41.084509 systemd[1]: Created slice kubepods-besteffort-pod944d02b6_2e26_4830_a03f_de1abcd56920.slice.
Feb 9 18:34:41.092987 kubelet[2503]: I0209 18:34:41.092957 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/944d02b6-2e26-4830-a03f-de1abcd56920-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-zffst\" (UID: \"944d02b6-2e26-4830-a03f-de1abcd56920\") " pod="kube-system/cilium-operator-f59cbd8c6-zffst"
Feb 9 18:34:41.094534 kubelet[2503]: I0209 18:34:41.094503 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd9qd\" (UniqueName: \"kubernetes.io/projected/944d02b6-2e26-4830-a03f-de1abcd56920-kube-api-access-fd9qd\") pod \"cilium-operator-f59cbd8c6-zffst\" (UID: \"944d02b6-2e26-4830-a03f-de1abcd56920\") " pod="kube-system/cilium-operator-f59cbd8c6-zffst"
Feb 9 18:34:41.233712 env[1380]: time="2024-02-09T18:34:41.233613287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq2hv,Uid:50e28b8d-b680-44e1-8341-d05372373a8e,Namespace:kube-system,Attempt:0,}"
Feb 9 18:34:41.266899 env[1380]: time="2024-02-09T18:34:41.266803912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:34:41.266899 env[1380]: time="2024-02-09T18:34:41.266851235Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:34:41.267117 env[1380]: time="2024-02-09T18:34:41.266861916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:34:41.267178 env[1380]: time="2024-02-09T18:34:41.267144578Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d272a476804b8b48b60a5afbc98cba0bc3256a2234cb976dcb89ad460be31efd pid=2606 runtime=io.containerd.runc.v2
Feb 9 18:34:41.282892 systemd[1]: Started cri-containerd-d272a476804b8b48b60a5afbc98cba0bc3256a2234cb976dcb89ad460be31efd.scope.
Feb 9 18:34:41.312124 env[1380]: time="2024-02-09T18:34:41.312064757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wq2hv,Uid:50e28b8d-b680-44e1-8341-d05372373a8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d272a476804b8b48b60a5afbc98cba0bc3256a2234cb976dcb89ad460be31efd\""
Feb 9 18:34:41.315274 env[1380]: time="2024-02-09T18:34:41.315233044Z" level=info msg="CreateContainer within sandbox \"d272a476804b8b48b60a5afbc98cba0bc3256a2234cb976dcb89ad460be31efd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 18:34:41.348388 env[1380]: time="2024-02-09T18:34:41.348296819Z" level=info msg="CreateContainer within sandbox \"d272a476804b8b48b60a5afbc98cba0bc3256a2234cb976dcb89ad460be31efd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"147871caf2f0f671d362301fef06d05cc7e5d1a16339f84d5ed4e13e63ab2083\""
Feb 9 18:34:41.349191 env[1380]: time="2024-02-09T18:34:41.349144165Z" level=info msg="StartContainer for \"147871caf2f0f671d362301fef06d05cc7e5d1a16339f84d5ed4e13e63ab2083\""
Feb 9 18:34:41.368409 systemd[1]: Started cri-containerd-147871caf2f0f671d362301fef06d05cc7e5d1a16339f84d5ed4e13e63ab2083.scope.
Feb 9 18:34:41.401870 env[1380]: time="2024-02-09T18:34:41.401808667Z" level=info msg="StartContainer for \"147871caf2f0f671d362301fef06d05cc7e5d1a16339f84d5ed4e13e63ab2083\" returns successfully"
Feb 9 18:34:41.566027 env[1380]: time="2024-02-09T18:34:41.565912649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f79m5,Uid:b2af5724-0b1c-472b-97c2-e6ec80acb58e,Namespace:kube-system,Attempt:0,}"
Feb 9 18:34:41.597081 env[1380]: time="2024-02-09T18:34:41.596999031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:34:41.597081 env[1380]: time="2024-02-09T18:34:41.597044394Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:34:41.597286 env[1380]: time="2024-02-09T18:34:41.597054875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:34:41.597496 env[1380]: time="2024-02-09T18:34:41.597445585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0 pid=2770 runtime=io.containerd.runc.v2
Feb 9 18:34:41.609508 systemd[1]: Started cri-containerd-845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0.scope.
Feb 9 18:34:41.639948 env[1380]: time="2024-02-09T18:34:41.639907733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f79m5,Uid:b2af5724-0b1c-472b-97c2-e6ec80acb58e,Namespace:kube-system,Attempt:0,} returns sandbox id \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\""
Feb 9 18:34:41.641756 env[1380]: time="2024-02-09T18:34:41.641723634Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 18:34:41.987747 env[1380]: time="2024-02-09T18:34:41.987709223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-zffst,Uid:944d02b6-2e26-4830-a03f-de1abcd56920,Namespace:kube-system,Attempt:0,}"
Feb 9 18:34:42.036953 env[1380]: time="2024-02-09T18:34:42.036762712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:34:42.036953 env[1380]: time="2024-02-09T18:34:42.036799954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:34:42.036953 env[1380]: time="2024-02-09T18:34:42.036810075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:34:42.037352 env[1380]: time="2024-02-09T18:34:42.037281871Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6 pid=2831 runtime=io.containerd.runc.v2
Feb 9 18:34:42.048591 systemd[1]: Started cri-containerd-97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6.scope.
Feb 9 18:34:42.092338 env[1380]: time="2024-02-09T18:34:42.092297876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-zffst,Uid:944d02b6-2e26-4830-a03f-de1abcd56920,Namespace:kube-system,Attempt:0,} returns sandbox id \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\""
Feb 9 18:34:42.129208 kubelet[2503]: I0209 18:34:42.128859 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-wq2hv" podStartSLOduration=2.128813387 pod.CreationTimestamp="2024-02-09 18:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:42.128213382 +0000 UTC m=+14.603070735" watchObservedRunningTime="2024-02-09 18:34:42.128813387 +0000 UTC m=+14.603670660"
Feb 9 18:34:47.088607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2404695677.mount: Deactivated successfully.
Feb 9 18:34:49.264618 env[1380]: time="2024-02-09T18:34:49.264573730Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:34:49.270895 env[1380]: time="2024-02-09T18:34:49.270863474Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:34:49.275492 env[1380]: time="2024-02-09T18:34:49.275465265Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:34:49.276082 env[1380]: time="2024-02-09T18:34:49.276051864Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Feb 9 18:34:49.277957 env[1380]: time="2024-02-09T18:34:49.277572967Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 18:34:49.279785 env[1380]: time="2024-02-09T18:34:49.279706191Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 18:34:49.305467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1672005073.mount: Deactivated successfully.
Feb 9 18:34:49.310825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2302856251.mount: Deactivated successfully.
Feb 9 18:34:49.320750 env[1380]: time="2024-02-09T18:34:49.320626771Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\""
Feb 9 18:34:49.322070 env[1380]: time="2024-02-09T18:34:49.321429205Z" level=info msg="StartContainer for \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\""
Feb 9 18:34:49.338382 systemd[1]: Started cri-containerd-a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99.scope.
Feb 9 18:34:49.376389 env[1380]: time="2024-02-09T18:34:49.376340828Z" level=info msg="StartContainer for \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\" returns successfully"
Feb 9 18:34:49.381095 systemd[1]: cri-containerd-a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99.scope: Deactivated successfully.
Feb 9 18:34:50.303905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99-rootfs.mount: Deactivated successfully.
Feb 9 18:34:51.121440 env[1380]: time="2024-02-09T18:34:51.121394808Z" level=info msg="shim disconnected" id=a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99
Feb 9 18:34:51.121928 env[1380]: time="2024-02-09T18:34:51.121904681Z" level=warning msg="cleaning up after shim disconnected" id=a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99 namespace=k8s.io
Feb 9 18:34:51.122000 env[1380]: time="2024-02-09T18:34:51.121987807Z" level=info msg="cleaning up dead shim"
Feb 9 18:34:51.129273 env[1380]: time="2024-02-09T18:34:51.129237240Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2914 runtime=io.containerd.runc.v2\n"
Feb 9 18:34:51.742572 env[1380]: time="2024-02-09T18:34:51.740830329Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 18:34:51.773693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183047585.mount: Deactivated successfully.
Feb 9 18:34:51.787017 env[1380]: time="2024-02-09T18:34:51.786977739Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\""
Feb 9 18:34:51.789269 env[1380]: time="2024-02-09T18:34:51.788810338Z" level=info msg="StartContainer for \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\""
Feb 9 18:34:51.811268 systemd[1]: Started cri-containerd-051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96.scope.
Feb 9 18:34:51.839853 env[1380]: time="2024-02-09T18:34:51.839791624Z" level=info msg="StartContainer for \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\" returns successfully"
Feb 9 18:34:51.850788 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 18:34:51.850983 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 18:34:51.851815 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 18:34:51.853456 systemd[1]: Starting systemd-sysctl.service...
Feb 9 18:34:51.863563 systemd[1]: cri-containerd-051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96.scope: Deactivated successfully.
Feb 9 18:34:51.866890 systemd[1]: Finished systemd-sysctl.service.
Feb 9 18:34:51.908362 env[1380]: time="2024-02-09T18:34:51.908300132Z" level=info msg="shim disconnected" id=051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96
Feb 9 18:34:51.908362 env[1380]: time="2024-02-09T18:34:51.908353375Z" level=warning msg="cleaning up after shim disconnected" id=051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96 namespace=k8s.io
Feb 9 18:34:51.908362 env[1380]: time="2024-02-09T18:34:51.908363616Z" level=info msg="cleaning up dead shim"
Feb 9 18:34:51.916518 env[1380]: time="2024-02-09T18:34:51.916451663Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2977 runtime=io.containerd.runc.v2\n"
Feb 9 18:34:52.738551 env[1380]: time="2024-02-09T18:34:52.738495017Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 18:34:52.771162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96-rootfs.mount: Deactivated successfully.
Feb 9 18:34:52.868066 env[1380]: time="2024-02-09T18:34:52.868010968Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\""
Feb 9 18:34:52.870012 env[1380]: time="2024-02-09T18:34:52.868751695Z" level=info msg="StartContainer for \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\""
Feb 9 18:34:52.889697 systemd[1]: Started cri-containerd-a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860.scope.
Feb 9 18:34:52.927028 systemd[1]: cri-containerd-a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860.scope: Deactivated successfully.
Feb 9 18:34:52.929416 env[1380]: time="2024-02-09T18:34:52.929379385Z" level=info msg="StartContainer for \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\" returns successfully"
Feb 9 18:34:52.934523 env[1380]: time="2024-02-09T18:34:52.934485073Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:34:52.947354 env[1380]: time="2024-02-09T18:34:52.947320576Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:34:53.294147 env[1380]: time="2024-02-09T18:34:53.294108046Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 18:34:53.294600 env[1380]: time="2024-02-09T18:34:53.294573075Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 9 18:34:53.297919 env[1380]: time="2024-02-09T18:34:53.297887445Z" level=info msg="CreateContainer within sandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 18:34:53.302884 env[1380]: time="2024-02-09T18:34:53.302844158Z" level=info msg="shim disconnected" id=a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860
Feb 9 18:34:53.302884 env[1380]: time="2024-02-09T18:34:53.302882640Z" level=warning msg="cleaning up after shim disconnected" id=a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860 namespace=k8s.io
Feb 9 18:34:53.302986 env[1380]: time="2024-02-09T18:34:53.302892721Z" level=info msg="cleaning up dead shim"
Feb 9 18:34:53.313182 env[1380]: time="2024-02-09T18:34:53.313122007Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3035 runtime=io.containerd.runc.v2\n"
Feb 9 18:34:53.330661 env[1380]: time="2024-02-09T18:34:53.330618831Z" level=info msg="CreateContainer within sandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\""
Feb 9 18:34:53.333228 env[1380]: time="2024-02-09T18:34:53.333167552Z" level=info msg="StartContainer for \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\""
Feb 9 18:34:53.347772 systemd[1]: Started cri-containerd-833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd.scope.
Feb 9 18:34:53.380715 env[1380]: time="2024-02-09T18:34:53.380647430Z" level=info msg="StartContainer for \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" returns successfully"
Feb 9 18:34:53.741319 env[1380]: time="2024-02-09T18:34:53.741256638Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 18:34:53.772788 systemd[1]: run-containerd-runc-k8s.io-a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860-runc.wZDn7x.mount: Deactivated successfully.
Feb 9 18:34:53.772873 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860-rootfs.mount: Deactivated successfully.
Feb 9 18:34:53.780426 env[1380]: time="2024-02-09T18:34:53.780363587Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\""
Feb 9 18:34:53.781197 env[1380]: time="2024-02-09T18:34:53.781152957Z" level=info msg="StartContainer for \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\""
Feb 9 18:34:53.807159 systemd[1]: run-containerd-runc-k8s.io-e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5-runc.VCyIPO.mount: Deactivated successfully.
Feb 9 18:34:53.810393 systemd[1]: Started cri-containerd-e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5.scope.
Feb 9 18:34:53.844386 kubelet[2503]: I0209 18:34:53.844345 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-zffst" podStartSLOduration=-9.22337202401047e+09 pod.CreationTimestamp="2024-02-09 18:34:41 +0000 UTC" firstStartedPulling="2024-02-09 18:34:42.094992602 +0000 UTC m=+14.569849915" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:53.762032909 +0000 UTC m=+26.236890222" watchObservedRunningTime="2024-02-09 18:34:53.844306424 +0000 UTC m=+26.319163737"
Feb 9 18:34:53.872179 systemd[1]: cri-containerd-e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5.scope: Deactivated successfully.
Feb 9 18:34:53.874695 env[1380]: time="2024-02-09T18:34:53.874648340Z" level=info msg="StartContainer for \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\" returns successfully"
Feb 9 18:34:53.901019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5-rootfs.mount: Deactivated successfully.
Feb 9 18:34:53.917872 env[1380]: time="2024-02-09T18:34:53.917778783Z" level=info msg="shim disconnected" id=e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5
Feb 9 18:34:53.918399 env[1380]: time="2024-02-09T18:34:53.918370780Z" level=warning msg="cleaning up after shim disconnected" id=e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5 namespace=k8s.io
Feb 9 18:34:53.918496 env[1380]: time="2024-02-09T18:34:53.918481227Z" level=info msg="cleaning up dead shim"
Feb 9 18:34:53.932632 env[1380]: time="2024-02-09T18:34:53.932590718Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3131 runtime=io.containerd.runc.v2\n"
Feb 9 18:34:54.745745 env[1380]: time="2024-02-09T18:34:54.745674913Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 18:34:54.781108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2893449204.mount: Deactivated successfully.
Feb 9 18:34:54.786042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494809323.mount: Deactivated successfully.
Feb 9 18:34:54.798066 env[1380]: time="2024-02-09T18:34:54.797933240Z" level=info msg="CreateContainer within sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\""
Feb 9 18:34:54.799382 env[1380]: time="2024-02-09T18:34:54.798541638Z" level=info msg="StartContainer for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\""
Feb 9 18:34:54.814642 systemd[1]: Started cri-containerd-96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327.scope.
Feb 9 18:34:54.847590 env[1380]: time="2024-02-09T18:34:54.847545283Z" level=info msg="StartContainer for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" returns successfully"
Feb 9 18:34:54.950710 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:34:54.952239 kubelet[2503]: I0209 18:34:54.951490 2503 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 9 18:34:54.972989 kubelet[2503]: I0209 18:34:54.972956 2503 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:34:54.977749 systemd[1]: Created slice kubepods-burstable-pod61d7ef77_97e1_4e22_b57d_08b32f8520d9.slice.
Feb 9 18:34:54.980729 kubelet[2503]: I0209 18:34:54.980701 2503 topology_manager.go:210] "Topology Admit Handler"
Feb 9 18:34:54.984875 systemd[1]: Created slice kubepods-burstable-pod27b5ff04_2306_4cca_b3fa_353bdc192e77.slice.
Feb 9 18:34:55.075692 kubelet[2503]: I0209 18:34:55.075578 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmcvr\" (UniqueName: \"kubernetes.io/projected/61d7ef77-97e1-4e22-b57d-08b32f8520d9-kube-api-access-wmcvr\") pod \"coredns-787d4945fb-m7hcs\" (UID: \"61d7ef77-97e1-4e22-b57d-08b32f8520d9\") " pod="kube-system/coredns-787d4945fb-m7hcs"
Feb 9 18:34:55.075692 kubelet[2503]: I0209 18:34:55.075626 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61d7ef77-97e1-4e22-b57d-08b32f8520d9-config-volume\") pod \"coredns-787d4945fb-m7hcs\" (UID: \"61d7ef77-97e1-4e22-b57d-08b32f8520d9\") " pod="kube-system/coredns-787d4945fb-m7hcs"
Feb 9 18:34:55.075854 kubelet[2503]: I0209 18:34:55.075718 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwn9l\" (UniqueName: \"kubernetes.io/projected/27b5ff04-2306-4cca-b3fa-353bdc192e77-kube-api-access-vwn9l\") pod \"coredns-787d4945fb-rwd97\" (UID: \"27b5ff04-2306-4cca-b3fa-353bdc192e77\") " pod="kube-system/coredns-787d4945fb-rwd97"
Feb 9 18:34:55.075854 kubelet[2503]: I0209 18:34:55.075744 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27b5ff04-2306-4cca-b3fa-353bdc192e77-config-volume\") pod \"coredns-787d4945fb-rwd97\" (UID: \"27b5ff04-2306-4cca-b3fa-353bdc192e77\") " pod="kube-system/coredns-787d4945fb-rwd97"
Feb 9 18:34:55.280788 env[1380]: time="2024-02-09T18:34:55.280750934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-m7hcs,Uid:61d7ef77-97e1-4e22-b57d-08b32f8520d9,Namespace:kube-system,Attempt:0,}"
Feb 9 18:34:55.288015 env[1380]: time="2024-02-09T18:34:55.287973576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rwd97,Uid:27b5ff04-2306-4cca-b3fa-353bdc192e77,Namespace:kube-system,Attempt:0,}"
Feb 9 18:34:55.358718 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Feb 9 18:34:55.761424 kubelet[2503]: I0209 18:34:55.761381 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-f79m5" podStartSLOduration=-9.223372021093431e+09 pod.CreationTimestamp="2024-02-09 18:34:40 +0000 UTC" firstStartedPulling="2024-02-09 18:34:41.641247517 +0000 UTC m=+14.116104790" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:55.760282631 +0000 UTC m=+28.235139944" watchObservedRunningTime="2024-02-09 18:34:55.761343736 +0000 UTC m=+28.236201049"
Feb 9 18:34:56.991782 systemd-networkd[1533]: cilium_host: Link UP
Feb 9 18:34:56.991889 systemd-networkd[1533]: cilium_net: Link UP
Feb 9 18:34:56.991891 systemd-networkd[1533]: cilium_net: Gained carrier
Feb 9 18:34:56.991995 systemd-networkd[1533]: cilium_host: Gained carrier
Feb 9 18:34:57.000445 systemd-networkd[1533]: cilium_host: Gained IPv6LL
Feb 9 18:34:57.000709 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 9 18:34:57.172236 systemd-networkd[1533]: cilium_vxlan: Link UP
Feb 9 18:34:57.172244 systemd-networkd[1533]: cilium_vxlan: Gained carrier
Feb 9 18:34:57.412712 kernel: NET: Registered PF_ALG protocol family
Feb 9 18:34:57.907855 systemd-networkd[1533]: cilium_net: Gained IPv6LL
Feb 9 18:34:58.159164 systemd-networkd[1533]: lxc_health: Link UP
Feb 9 18:34:58.178262 systemd-networkd[1533]: lxc_health: Gained carrier
Feb 9 18:34:58.178735 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:34:58.354565 systemd-networkd[1533]: lxcf469c69c18c0: Link UP
Feb 9 18:34:58.366048 kernel: eth0: renamed from tmp06272
Feb 9 18:34:58.380038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf469c69c18c0: link becomes ready
Feb 9 18:34:58.379802 systemd-networkd[1533]: lxcf469c69c18c0: Gained carrier
Feb 9 18:34:58.388169 systemd-networkd[1533]: lxc52fd5914817c: Link UP
Feb 9 18:34:58.403709 kernel: eth0: renamed from tmp36971
Feb 9 18:34:58.415018 systemd-networkd[1533]: lxc52fd5914817c: Gained carrier
Feb 9 18:34:58.415738 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc52fd5914817c: link becomes ready
Feb 9 18:34:58.931869 systemd-networkd[1533]: cilium_vxlan: Gained IPv6LL
Feb 9 18:34:59.444794 systemd-networkd[1533]: lxc_health: Gained IPv6LL
Feb 9 18:34:59.507810 systemd-networkd[1533]: lxcf469c69c18c0: Gained IPv6LL
Feb 9 18:34:59.763839 systemd-networkd[1533]: lxc52fd5914817c: Gained IPv6LL
Feb 9 18:35:01.972338 env[1380]: time="2024-02-09T18:35:01.971081952Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:35:01.972338 env[1380]: time="2024-02-09T18:35:01.971116274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:35:01.972338 env[1380]: time="2024-02-09T18:35:01.971126715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:35:01.972338 env[1380]: time="2024-02-09T18:35:01.971219560Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/36971bbf3d42e197b55b987f7a532915747d02dfa4c44a3be9aa0cb8d40c8c2c pid=3675 runtime=io.containerd.runc.v2
Feb 9 18:35:01.994278 systemd[1]: run-containerd-runc-k8s.io-36971bbf3d42e197b55b987f7a532915747d02dfa4c44a3be9aa0cb8d40c8c2c-runc.nF0dSL.mount: Deactivated successfully.
Feb 9 18:35:02.003850 systemd[1]: Started cri-containerd-36971bbf3d42e197b55b987f7a532915747d02dfa4c44a3be9aa0cb8d40c8c2c.scope.
Feb 9 18:35:02.030917 env[1380]: time="2024-02-09T18:35:02.030857556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 18:35:02.031091 env[1380]: time="2024-02-09T18:35:02.031068728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 18:35:02.031201 env[1380]: time="2024-02-09T18:35:02.031179774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 18:35:02.031409 env[1380]: time="2024-02-09T18:35:02.031382145Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/06272308b9c4ca67ba7f4db8fe66e5a40a0d28a0794f30ab973c8738dc9a5a3f pid=3706 runtime=io.containerd.runc.v2
Feb 9 18:35:02.049006 systemd[1]: Started cri-containerd-06272308b9c4ca67ba7f4db8fe66e5a40a0d28a0794f30ab973c8738dc9a5a3f.scope.
Feb 9 18:35:02.081738 env[1380]: time="2024-02-09T18:35:02.081701044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-rwd97,Uid:27b5ff04-2306-4cca-b3fa-353bdc192e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"36971bbf3d42e197b55b987f7a532915747d02dfa4c44a3be9aa0cb8d40c8c2c\""
Feb 9 18:35:02.085208 env[1380]: time="2024-02-09T18:35:02.085173276Z" level=info msg="CreateContainer within sandbox \"36971bbf3d42e197b55b987f7a532915747d02dfa4c44a3be9aa0cb8d40c8c2c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 18:35:02.102736 env[1380]: time="2024-02-09T18:35:02.102698524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-m7hcs,Uid:61d7ef77-97e1-4e22-b57d-08b32f8520d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"06272308b9c4ca67ba7f4db8fe66e5a40a0d28a0794f30ab973c8738dc9a5a3f\""
Feb 9 18:35:02.105932 env[1380]: time="2024-02-09T18:35:02.105889260Z" level=info msg="CreateContainer within sandbox \"06272308b9c4ca67ba7f4db8fe66e5a40a0d28a0794f30ab973c8738dc9a5a3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Feb 9 18:35:02.141941 env[1380]: time="2024-02-09T18:35:02.141894649Z" level=info msg="CreateContainer within sandbox \"36971bbf3d42e197b55b987f7a532915747d02dfa4c44a3be9aa0cb8d40c8c2c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"eca6b0d3f19bb72a48cc5475bfa8c9b214e6545b4a7aac119e6ad225c8f8818e\""
Feb 9 18:35:02.142635 env[1380]: time="2024-02-09T18:35:02.142611288Z" level=info msg="StartContainer for \"eca6b0d3f19bb72a48cc5475bfa8c9b214e6545b4a7aac119e6ad225c8f8818e\""
Feb 9 18:35:02.161129 env[1380]: time="2024-02-09T18:35:02.161085149Z" level=info msg="CreateContainer within sandbox \"06272308b9c4ca67ba7f4db8fe66e5a40a0d28a0794f30ab973c8738dc9a5a3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"35db2fab522229aea51bfc26d068270ce91e124d0ad90217f8939026821bbceb\""
Feb 9 18:35:02.162361 env[1380]: time="2024-02-09T18:35:02.162334498Z" level=info msg="StartContainer for \"35db2fab522229aea51bfc26d068270ce91e124d0ad90217f8939026821bbceb\""
Feb 9 18:35:02.167009 systemd[1]: Started cri-containerd-eca6b0d3f19bb72a48cc5475bfa8c9b214e6545b4a7aac119e6ad225c8f8818e.scope.
Feb 9 18:35:02.197729 systemd[1]: Started cri-containerd-35db2fab522229aea51bfc26d068270ce91e124d0ad90217f8939026821bbceb.scope.
Feb 9 18:35:02.232897 env[1380]: time="2024-02-09T18:35:02.232313123Z" level=info msg="StartContainer for \"eca6b0d3f19bb72a48cc5475bfa8c9b214e6545b4a7aac119e6ad225c8f8818e\" returns successfully"
Feb 9 18:35:02.246905 env[1380]: time="2024-02-09T18:35:02.246859606Z" level=info msg="StartContainer for \"35db2fab522229aea51bfc26d068270ce91e124d0ad90217f8939026821bbceb\" returns successfully"
Feb 9 18:35:02.784285 kubelet[2503]: I0209 18:35:02.784246 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-m7hcs" podStartSLOduration=21.784211484 pod.CreationTimestamp="2024-02-09 18:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:02.783321194 +0000 UTC m=+35.258178507" watchObservedRunningTime="2024-02-09 18:35:02.784211484 +0000 UTC m=+35.259068797"
Feb 9 18:35:02.784606 kubelet[2503]: I0209 18:35:02.784322 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-rwd97" podStartSLOduration=21.784307729 pod.CreationTimestamp="2024-02-09 18:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:02.772986904 +0000 UTC m=+35.247844217" watchObservedRunningTime="2024-02-09 18:35:02.784307729 +0000 UTC m=+35.259165042"
Feb 9 18:35:02.976497 systemd[1]: run-containerd-runc-k8s.io-06272308b9c4ca67ba7f4db8fe66e5a40a0d28a0794f30ab973c8738dc9a5a3f-runc.ZSosdB.mount: Deactivated successfully.
Feb 9 18:37:03.062572 systemd[1]: Started sshd@5-10.200.20.32:22-10.200.12.6:54336.service.
Feb 9 18:37:03.478068 sshd[3900]: Accepted publickey for core from 10.200.12.6 port 54336 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:03.479782 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:03.483295 systemd-logind[1366]: New session 8 of user core.
Feb 9 18:37:03.486214 systemd[1]: Started session-8.scope.
Feb 9 18:37:04.019267 sshd[3900]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:04.021956 systemd-logind[1366]: Session 8 logged out. Waiting for processes to exit.
Feb 9 18:37:04.022528 systemd[1]: sshd@5-10.200.20.32:22-10.200.12.6:54336.service: Deactivated successfully.
Feb 9 18:37:04.023320 systemd[1]: session-8.scope: Deactivated successfully.
Feb 9 18:37:04.024381 systemd-logind[1366]: Removed session 8.
Feb 9 18:37:09.096365 systemd[1]: Started sshd@6-10.200.20.32:22-10.200.12.6:42422.service.
Feb 9 18:37:09.546387 sshd[3913]: Accepted publickey for core from 10.200.12.6 port 42422 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:09.548023 sshd[3913]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:09.552140 systemd[1]: Started session-9.scope.
Feb 9 18:37:09.552561 systemd-logind[1366]: New session 9 of user core.
Feb 9 18:37:09.935205 sshd[3913]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:09.938342 systemd[1]: sshd@6-10.200.20.32:22-10.200.12.6:42422.service: Deactivated successfully.
Feb 9 18:37:09.938492 systemd-logind[1366]: Session 9 logged out. Waiting for processes to exit.
Feb 9 18:37:09.939081 systemd[1]: session-9.scope: Deactivated successfully.
Feb 9 18:37:09.939836 systemd-logind[1366]: Removed session 9.
Feb 9 18:37:15.011930 systemd[1]: Started sshd@7-10.200.20.32:22-10.200.12.6:42432.service.
Feb 9 18:37:15.461699 sshd[3927]: Accepted publickey for core from 10.200.12.6 port 42432 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:15.463445 sshd[3927]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:15.467199 systemd-logind[1366]: New session 10 of user core.
Feb 9 18:37:15.467736 systemd[1]: Started session-10.scope.
Feb 9 18:37:15.843470 sshd[3927]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:15.845954 systemd-logind[1366]: Session 10 logged out. Waiting for processes to exit.
Feb 9 18:37:15.846202 systemd[1]: sshd@7-10.200.20.32:22-10.200.12.6:42432.service: Deactivated successfully.
Feb 9 18:37:15.846935 systemd[1]: session-10.scope: Deactivated successfully.
Feb 9 18:37:15.847765 systemd-logind[1366]: Removed session 10.
Feb 9 18:37:20.914743 systemd[1]: Started sshd@8-10.200.20.32:22-10.200.12.6:35380.service.
Feb 9 18:37:21.335867 sshd[3939]: Accepted publickey for core from 10.200.12.6 port 35380 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:21.337144 sshd[3939]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:21.341417 systemd[1]: Started session-11.scope.
Feb 9 18:37:21.341738 systemd-logind[1366]: New session 11 of user core.
Feb 9 18:37:21.708046 sshd[3939]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:21.711018 systemd[1]: sshd@8-10.200.20.32:22-10.200.12.6:35380.service: Deactivated successfully.
Feb 9 18:37:21.711201 systemd-logind[1366]: Session 11 logged out. Waiting for processes to exit.
Feb 9 18:37:21.711739 systemd[1]: session-11.scope: Deactivated successfully.
Feb 9 18:37:21.712424 systemd-logind[1366]: Removed session 11.
Feb 9 18:37:26.780421 systemd[1]: Started sshd@9-10.200.20.32:22-10.200.12.6:35396.service.
Feb 9 18:37:27.200555 sshd[3951]: Accepted publickey for core from 10.200.12.6 port 35396 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:27.202154 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:27.206268 systemd[1]: Started session-12.scope.
Feb 9 18:37:27.206896 systemd-logind[1366]: New session 12 of user core.
Feb 9 18:37:27.573614 sshd[3951]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:27.576152 systemd[1]: sshd@9-10.200.20.32:22-10.200.12.6:35396.service: Deactivated successfully.
Feb 9 18:37:27.576931 systemd[1]: session-12.scope: Deactivated successfully.
Feb 9 18:37:27.577497 systemd-logind[1366]: Session 12 logged out. Waiting for processes to exit.
Feb 9 18:37:27.578202 systemd-logind[1366]: Removed session 12.
Feb 9 18:37:27.648908 systemd[1]: Started sshd@10-10.200.20.32:22-10.200.12.6:38876.service.
Feb 9 18:37:28.102236 sshd[3963]: Accepted publickey for core from 10.200.12.6 port 38876 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:28.103895 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:28.107580 systemd-logind[1366]: New session 13 of user core.
Feb 9 18:37:28.108048 systemd[1]: Started session-13.scope.
Feb 9 18:37:29.203218 sshd[3963]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:29.206487 systemd-logind[1366]: Session 13 logged out. Waiting for processes to exit.
Feb 9 18:37:29.207325 systemd[1]: sshd@10-10.200.20.32:22-10.200.12.6:38876.service: Deactivated successfully.
Feb 9 18:37:29.208073 systemd[1]: session-13.scope: Deactivated successfully.
Feb 9 18:37:29.209320 systemd-logind[1366]: Removed session 13.
Feb 9 18:37:29.280062 systemd[1]: Started sshd@11-10.200.20.32:22-10.200.12.6:38878.service.
Feb 9 18:37:29.735604 sshd[3975]: Accepted publickey for core from 10.200.12.6 port 38878 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:29.737205 sshd[3975]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:29.741578 systemd[1]: Started session-14.scope.
Feb 9 18:37:29.742021 systemd-logind[1366]: New session 14 of user core.
Feb 9 18:37:30.134912 sshd[3975]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:30.137503 systemd-logind[1366]: Session 14 logged out. Waiting for processes to exit.
Feb 9 18:37:30.138031 systemd[1]: session-14.scope: Deactivated successfully.
Feb 9 18:37:30.138916 systemd[1]: sshd@11-10.200.20.32:22-10.200.12.6:38878.service: Deactivated successfully.
Feb 9 18:37:30.139751 systemd-logind[1366]: Removed session 14.
Feb 9 18:37:35.212952 systemd[1]: Started sshd@12-10.200.20.32:22-10.200.12.6:38890.service.
Feb 9 18:37:35.658206 sshd[3986]: Accepted publickey for core from 10.200.12.6 port 38890 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:35.659872 sshd[3986]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:35.663721 systemd-logind[1366]: New session 15 of user core.
Feb 9 18:37:35.664188 systemd[1]: Started session-15.scope.
Feb 9 18:37:36.040490 sshd[3986]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:36.043264 systemd[1]: sshd@12-10.200.20.32:22-10.200.12.6:38890.service: Deactivated successfully.
Feb 9 18:37:36.044472 systemd[1]: session-15.scope: Deactivated successfully.
Feb 9 18:37:36.045766 systemd-logind[1366]: Session 15 logged out. Waiting for processes to exit.
Feb 9 18:37:36.046879 systemd-logind[1366]: Removed session 15.
Feb 9 18:37:41.111400 systemd[1]: Started sshd@13-10.200.20.32:22-10.200.12.6:49492.service.
Feb 9 18:37:41.526054 sshd[3999]: Accepted publickey for core from 10.200.12.6 port 49492 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:41.527413 sshd[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:41.532378 systemd[1]: Started session-16.scope.
Feb 9 18:37:41.532998 systemd-logind[1366]: New session 16 of user core.
Feb 9 18:37:41.893064 sshd[3999]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:41.895968 systemd-logind[1366]: Session 16 logged out. Waiting for processes to exit.
Feb 9 18:37:41.896147 systemd[1]: session-16.scope: Deactivated successfully.
Feb 9 18:37:41.896825 systemd[1]: sshd@13-10.200.20.32:22-10.200.12.6:49492.service: Deactivated successfully.
Feb 9 18:37:41.897918 systemd-logind[1366]: Removed session 16.
Feb 9 18:37:41.967524 systemd[1]: Started sshd@14-10.200.20.32:22-10.200.12.6:49504.service.
Feb 9 18:37:42.417836 sshd[4016]: Accepted publickey for core from 10.200.12.6 port 49504 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:42.419516 sshd[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:42.423501 systemd-logind[1366]: New session 17 of user core.
Feb 9 18:37:42.424030 systemd[1]: Started session-17.scope.
Feb 9 18:37:42.828710 sshd[4016]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:42.831378 systemd-logind[1366]: Session 17 logged out. Waiting for processes to exit.
Feb 9 18:37:42.831528 systemd[1]: sshd@14-10.200.20.32:22-10.200.12.6:49504.service: Deactivated successfully.
Feb 9 18:37:42.832260 systemd[1]: session-17.scope: Deactivated successfully.
Feb 9 18:37:42.833001 systemd-logind[1366]: Removed session 17.
Feb 9 18:37:42.899857 systemd[1]: Started sshd@15-10.200.20.32:22-10.200.12.6:49512.service.
Feb 9 18:37:43.316579 sshd[4025]: Accepted publickey for core from 10.200.12.6 port 49512 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:43.318213 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:43.322441 systemd[1]: Started session-18.scope.
Feb 9 18:37:43.323213 systemd-logind[1366]: New session 18 of user core.
Feb 9 18:37:44.480316 sshd[4025]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:44.483416 systemd[1]: sshd@15-10.200.20.32:22-10.200.12.6:49512.service: Deactivated successfully.
Feb 9 18:37:44.484189 systemd[1]: session-18.scope: Deactivated successfully.
Feb 9 18:37:44.485167 systemd-logind[1366]: Session 18 logged out. Waiting for processes to exit.
Feb 9 18:37:44.486152 systemd-logind[1366]: Removed session 18.
Feb 9 18:37:44.553822 systemd[1]: Started sshd@16-10.200.20.32:22-10.200.12.6:49514.service.
Feb 9 18:37:44.975947 sshd[4090]: Accepted publickey for core from 10.200.12.6 port 49514 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:44.977561 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:44.981601 systemd[1]: Started session-19.scope.
Feb 9 18:37:44.982177 systemd-logind[1366]: New session 19 of user core.
Feb 9 18:37:45.431867 sshd[4090]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:45.434982 systemd[1]: sshd@16-10.200.20.32:22-10.200.12.6:49514.service: Deactivated successfully.
Feb 9 18:37:45.435765 systemd[1]: session-19.scope: Deactivated successfully.
Feb 9 18:37:45.436379 systemd-logind[1366]: Session 19 logged out. Waiting for processes to exit.
Feb 9 18:37:45.437298 systemd-logind[1366]: Removed session 19.
Feb 9 18:37:45.508075 systemd[1]: Started sshd@17-10.200.20.32:22-10.200.12.6:49526.service.
Feb 9 18:37:45.929542 sshd[4101]: Accepted publickey for core from 10.200.12.6 port 49526 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:45.931151 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:45.935397 systemd[1]: Started session-20.scope.
Feb 9 18:37:45.935975 systemd-logind[1366]: New session 20 of user core.
Feb 9 18:37:46.290114 sshd[4101]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:46.293044 systemd[1]: sshd@17-10.200.20.32:22-10.200.12.6:49526.service: Deactivated successfully.
Feb 9 18:37:46.293795 systemd[1]: session-20.scope: Deactivated successfully.
Feb 9 18:37:46.294588 systemd-logind[1366]: Session 20 logged out. Waiting for processes to exit.
Feb 9 18:37:46.295350 systemd-logind[1366]: Removed session 20.
Feb 9 18:37:51.362126 systemd[1]: Started sshd@18-10.200.20.32:22-10.200.12.6:40950.service.
Feb 9 18:37:51.782610 sshd[4141]: Accepted publickey for core from 10.200.12.6 port 40950 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:51.784380 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:51.788773 systemd[1]: Started session-21.scope.
Feb 9 18:37:51.789745 systemd-logind[1366]: New session 21 of user core.
Feb 9 18:37:52.151231 sshd[4141]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:52.154263 systemd[1]: sshd@18-10.200.20.32:22-10.200.12.6:40950.service: Deactivated successfully.
Feb 9 18:37:52.155054 systemd[1]: session-21.scope: Deactivated successfully.
Feb 9 18:37:52.156010 systemd-logind[1366]: Session 21 logged out. Waiting for processes to exit.
Feb 9 18:37:52.156824 systemd-logind[1366]: Removed session 21.
Feb 9 18:37:57.226958 systemd[1]: Started sshd@19-10.200.20.32:22-10.200.12.6:55794.service.
Feb 9 18:37:57.673232 sshd[4153]: Accepted publickey for core from 10.200.12.6 port 55794 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:57.675482 sshd[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:57.679579 systemd-logind[1366]: New session 22 of user core.
Feb 9 18:37:57.680052 systemd[1]: Started session-22.scope.
Feb 9 18:37:58.058369 sshd[4153]: pam_unix(sshd:session): session closed for user core
Feb 9 18:37:58.060916 systemd-logind[1366]: Session 22 logged out. Waiting for processes to exit.
Feb 9 18:37:58.061005 systemd[1]: session-22.scope: Deactivated successfully.
Feb 9 18:37:58.061926 systemd[1]: sshd@19-10.200.20.32:22-10.200.12.6:55794.service: Deactivated successfully.
Feb 9 18:37:58.062697 systemd-logind[1366]: Removed session 22.
Feb 9 18:38:03.129900 systemd[1]: Started sshd@20-10.200.20.32:22-10.200.12.6:55810.service.
Feb 9 18:38:03.545590 sshd[4166]: Accepted publickey for core from 10.200.12.6 port 55810 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:38:03.547284 sshd[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:03.551417 systemd[1]: Started session-23.scope.
Feb 9 18:38:03.551989 systemd-logind[1366]: New session 23 of user core.
Feb 9 18:38:03.912221 sshd[4166]: pam_unix(sshd:session): session closed for user core
Feb 9 18:38:03.915130 systemd[1]: sshd@20-10.200.20.32:22-10.200.12.6:55810.service: Deactivated successfully.
Feb 9 18:38:03.915860 systemd[1]: session-23.scope: Deactivated successfully.
Feb 9 18:38:03.916431 systemd-logind[1366]: Session 23 logged out. Waiting for processes to exit.
Feb 9 18:38:03.917149 systemd-logind[1366]: Removed session 23.
Feb 9 18:38:03.982148 systemd[1]: Started sshd@21-10.200.20.32:22-10.200.12.6:55814.service.
Feb 9 18:38:04.396753 sshd[4182]: Accepted publickey for core from 10.200.12.6 port 55814 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:38:04.398513 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:38:04.403060 systemd[1]: Started session-24.scope.
Feb 9 18:38:04.403362 systemd-logind[1366]: New session 24 of user core.
Feb 9 18:38:06.366283 env[1380]: time="2024-02-09T18:38:06.366238271Z" level=info msg="StopContainer for \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" with timeout 30 (s)"
Feb 9 18:38:06.366770 env[1380]: time="2024-02-09T18:38:06.366565843Z" level=info msg="Stop container \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" with signal terminated"
Feb 9 18:38:06.381903 systemd[1]: cri-containerd-833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd.scope: Deactivated successfully.
Feb 9 18:38:06.386065 env[1380]: time="2024-02-09T18:38:06.385998131Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 18:38:06.394614 env[1380]: time="2024-02-09T18:38:06.394521753Z" level=info msg="StopContainer for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" with timeout 1 (s)"
Feb 9 18:38:06.394883 env[1380]: time="2024-02-09T18:38:06.394855765Z" level=info msg="Stop container \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" with signal terminated"
Feb 9 18:38:06.403305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd-rootfs.mount: Deactivated successfully.
Feb 9 18:38:06.406379 systemd-networkd[1533]: lxc_health: Link DOWN
Feb 9 18:38:06.406383 systemd-networkd[1533]: lxc_health: Lost carrier
Feb 9 18:38:06.423717 systemd[1]: cri-containerd-96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327.scope: Deactivated successfully.
Feb 9 18:38:06.424021 systemd[1]: cri-containerd-96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327.scope: Consumed 6.391s CPU time.
Feb 9 18:38:06.435770 env[1380]: time="2024-02-09T18:38:06.435663211Z" level=info msg="shim disconnected" id=833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd
Feb 9 18:38:06.435770 env[1380]: time="2024-02-09T18:38:06.435766215Z" level=warning msg="cleaning up after shim disconnected" id=833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd namespace=k8s.io
Feb 9 18:38:06.435770 env[1380]: time="2024-02-09T18:38:06.435776815Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:06.445920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327-rootfs.mount: Deactivated successfully.
Feb 9 18:38:06.451944 env[1380]: time="2024-02-09T18:38:06.451886426Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4249 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:06.461486 env[1380]: time="2024-02-09T18:38:06.461440805Z" level=info msg="StopContainer for \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" returns successfully"
Feb 9 18:38:06.462397 env[1380]: time="2024-02-09T18:38:06.462371117Z" level=info msg="StopPodSandbox for \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\""
Feb 9 18:38:06.462752 env[1380]: time="2024-02-09T18:38:06.462720210Z" level=info msg="Container to stop \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:38:06.464931 env[1380]: time="2024-02-09T18:38:06.462659328Z" level=info msg="shim disconnected" id=96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327
Feb 9 18:38:06.465049 env[1380]: time="2024-02-09T18:38:06.465032092Z" level=warning msg="cleaning up after shim disconnected" id=96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327 namespace=k8s.io
Feb 9 18:38:06.465092 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6-shm.mount: Deactivated successfully.
Feb 9 18:38:06.465228 env[1380]: time="2024-02-09T18:38:06.465213258Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:06.474445 systemd[1]: cri-containerd-97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6.scope: Deactivated successfully.
Feb 9 18:38:06.479605 env[1380]: time="2024-02-09T18:38:06.479560367Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4266 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:06.484790 env[1380]: time="2024-02-09T18:38:06.484743470Z" level=info msg="StopContainer for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" returns successfully"
Feb 9 18:38:06.485401 env[1380]: time="2024-02-09T18:38:06.485376133Z" level=info msg="StopPodSandbox for \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\""
Feb 9 18:38:06.485634 env[1380]: time="2024-02-09T18:38:06.485611941Z" level=info msg="Container to stop \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:38:06.485745 env[1380]: time="2024-02-09T18:38:06.485727145Z" level=info msg="Container to stop \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:38:06.485839 env[1380]: time="2024-02-09T18:38:06.485822148Z" level=info msg="Container to stop \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:38:06.485938 env[1380]: time="2024-02-09T18:38:06.485920152Z" level=info msg="Container to stop \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:38:06.486032 env[1380]: time="2024-02-09T18:38:06.486013995Z" level=info msg="Container to stop \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 18:38:06.487643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0-shm.mount: Deactivated successfully.
Feb 9 18:38:06.501867 systemd[1]: cri-containerd-845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0.scope: Deactivated successfully.
Feb 9 18:38:06.523919 env[1380]: time="2024-02-09T18:38:06.523865737Z" level=info msg="shim disconnected" id=97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6
Feb 9 18:38:06.525807 env[1380]: time="2024-02-09T18:38:06.525776404Z" level=warning msg="cleaning up after shim disconnected" id=97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6 namespace=k8s.io
Feb 9 18:38:06.525967 env[1380]: time="2024-02-09T18:38:06.524218669Z" level=info msg="shim disconnected" id=845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0
Feb 9 18:38:06.526080 env[1380]: time="2024-02-09T18:38:06.526053494Z" level=warning msg="cleaning up after shim disconnected" id=845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0 namespace=k8s.io
Feb 9 18:38:06.526160 env[1380]: time="2024-02-09T18:38:06.526146417Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:06.526428 env[1380]: time="2024-02-09T18:38:06.526032293Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:06.534756 env[1380]: time="2024-02-09T18:38:06.534705481Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4314 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:06.535212 env[1380]: time="2024-02-09T18:38:06.535184658Z" level=info msg="TearDown network for sandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" successfully"
Feb 9 18:38:06.535311 env[1380]: time="2024-02-09T18:38:06.535293661Z" level=info msg="StopPodSandbox for \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" returns successfully"
Feb 9 18:38:06.537061 env[1380]: time="2024-02-09T18:38:06.537036763Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4313 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:06.537671 env[1380]: time="2024-02-09T18:38:06.537644185Z" level=info msg="TearDown network for sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" successfully"
Feb 9 18:38:06.537853 env[1380]: time="2024-02-09T18:38:06.537831951Z" level=info msg="StopPodSandbox for \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" returns successfully"
Feb 9 18:38:06.711095 kubelet[2503]: I0209 18:38:06.710973 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-cgroup\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") "
Feb 9 18:38:06.711095 kubelet[2503]: I0209 18:38:06.711034 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-config-path\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") "
Feb 9 18:38:06.711095 kubelet[2503]: I0209 18:38:06.711055 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-net\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") "
Feb 9 18:38:06.711095 kubelet[2503]: I0209 18:38:06.711085 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2af5724-0b1c-472b-97c2-e6ec80acb58e-clustermesh-secrets\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9
18:38:06.711500 kubelet[2503]: I0209 18:38:06.711106 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-lib-modules\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711500 kubelet[2503]: I0209 18:38:06.711123 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-kernel\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711500 kubelet[2503]: I0209 18:38:06.711152 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cni-path\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711500 kubelet[2503]: I0209 18:38:06.711171 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-etc-cni-netd\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711500 kubelet[2503]: I0209 18:38:06.711191 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/944d02b6-2e26-4830-a03f-de1abcd56920-cilium-config-path\") pod \"944d02b6-2e26-4830-a03f-de1abcd56920\" (UID: \"944d02b6-2e26-4830-a03f-de1abcd56920\") " Feb 9 18:38:06.711500 kubelet[2503]: I0209 18:38:06.711225 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tngv\" (UniqueName: 
\"kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-kube-api-access-2tngv\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711642 kubelet[2503]: I0209 18:38:06.711243 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hostproc\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711642 kubelet[2503]: I0209 18:38:06.711259 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-bpf-maps\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711642 kubelet[2503]: I0209 18:38:06.711278 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd9qd\" (UniqueName: \"kubernetes.io/projected/944d02b6-2e26-4830-a03f-de1abcd56920-kube-api-access-fd9qd\") pod \"944d02b6-2e26-4830-a03f-de1abcd56920\" (UID: \"944d02b6-2e26-4830-a03f-de1abcd56920\") " Feb 9 18:38:06.711642 kubelet[2503]: I0209 18:38:06.711305 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hubble-tls\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711642 kubelet[2503]: I0209 18:38:06.711348 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-xtables-lock\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711642 kubelet[2503]: I0209 18:38:06.711366 2503 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-run\") pod \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\" (UID: \"b2af5724-0b1c-472b-97c2-e6ec80acb58e\") " Feb 9 18:38:06.711828 kubelet[2503]: I0209 18:38:06.711435 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.711828 kubelet[2503]: I0209 18:38:06.711481 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.711828 kubelet[2503]: W0209 18:38:06.711653 2503 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/b2af5724-0b1c-472b-97c2-e6ec80acb58e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:38:06.712790 kubelet[2503]: W0209 18:38:06.712057 2503 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/944d02b6-2e26-4830-a03f-de1abcd56920/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:38:06.716771 kubelet[2503]: I0209 18:38:06.713841 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:38:06.716771 kubelet[2503]: I0209 18:38:06.714163 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/944d02b6-2e26-4830-a03f-de1abcd56920-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "944d02b6-2e26-4830-a03f-de1abcd56920" (UID: "944d02b6-2e26-4830-a03f-de1abcd56920"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:38:06.716771 kubelet[2503]: I0209 18:38:06.714205 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.716771 kubelet[2503]: I0209 18:38:06.716488 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-kube-api-access-2tngv" (OuterVolumeSpecName: "kube-api-access-2tngv") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "kube-api-access-2tngv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:38:06.716965 kubelet[2503]: I0209 18:38:06.716536 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hostproc" (OuterVolumeSpecName: "hostproc") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.716965 kubelet[2503]: I0209 18:38:06.716556 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.717060 kubelet[2503]: I0209 18:38:06.716995 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.717060 kubelet[2503]: I0209 18:38:06.717025 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.717060 kubelet[2503]: I0209 18:38:06.717049 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.717144 kubelet[2503]: I0209 18:38:06.717065 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cni-path" (OuterVolumeSpecName: "cni-path") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.717144 kubelet[2503]: I0209 18:38:06.717081 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:06.719192 kubelet[2503]: I0209 18:38:06.719155 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2af5724-0b1c-472b-97c2-e6ec80acb58e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:38:06.719764 kubelet[2503]: I0209 18:38:06.719740 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944d02b6-2e26-4830-a03f-de1abcd56920-kube-api-access-fd9qd" (OuterVolumeSpecName: "kube-api-access-fd9qd") pod "944d02b6-2e26-4830-a03f-de1abcd56920" (UID: "944d02b6-2e26-4830-a03f-de1abcd56920"). InnerVolumeSpecName "kube-api-access-fd9qd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:38:06.721492 kubelet[2503]: I0209 18:38:06.721457 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b2af5724-0b1c-472b-97c2-e6ec80acb58e" (UID: "b2af5724-0b1c-472b-97c2-e6ec80acb58e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:38:06.811774 kubelet[2503]: I0209 18:38:06.811717 2503 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-2tngv\" (UniqueName: \"kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-kube-api-access-2tngv\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.811774 kubelet[2503]: I0209 18:38:06.811769 2503 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hostproc\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.811774 kubelet[2503]: I0209 18:38:06.811780 2503 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-bpf-maps\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811801 2503 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fd9qd\" (UniqueName: \"kubernetes.io/projected/944d02b6-2e26-4830-a03f-de1abcd56920-kube-api-access-fd9qd\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811818 2503 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b2af5724-0b1c-472b-97c2-e6ec80acb58e-hubble-tls\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811829 2503 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-xtables-lock\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811845 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-run\") on node 
\"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811855 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-cgroup\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811871 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cilium-config-path\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811881 2503 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-net\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812028 kubelet[2503]: I0209 18:38:06.811902 2503 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b2af5724-0b1c-472b-97c2-e6ec80acb58e-clustermesh-secrets\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812313 kubelet[2503]: I0209 18:38:06.811912 2503 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-lib-modules\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812313 kubelet[2503]: I0209 18:38:06.811932 2503 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812313 kubelet[2503]: I0209 18:38:06.811941 2503 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-cni-path\") 
on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812313 kubelet[2503]: I0209 18:38:06.811963 2503 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b2af5724-0b1c-472b-97c2-e6ec80acb58e-etc-cni-netd\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:06.812313 kubelet[2503]: I0209 18:38:06.811980 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/944d02b6-2e26-4830-a03f-de1abcd56920-cilium-config-path\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:07.072382 kubelet[2503]: I0209 18:38:07.072348 2503 scope.go:115] "RemoveContainer" containerID="96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327" Feb 9 18:38:07.074093 env[1380]: time="2024-02-09T18:38:07.074043830Z" level=info msg="RemoveContainer for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\"" Feb 9 18:38:07.079262 systemd[1]: Removed slice kubepods-burstable-podb2af5724_0b1c_472b_97c2_e6ec80acb58e.slice. Feb 9 18:38:07.079352 systemd[1]: kubepods-burstable-podb2af5724_0b1c_472b_97c2_e6ec80acb58e.slice: Consumed 6.485s CPU time. Feb 9 18:38:07.082663 systemd[1]: Removed slice kubepods-besteffort-pod944d02b6_2e26_4830_a03f_de1abcd56920.slice. 
Feb 9 18:38:07.087465 env[1380]: time="2024-02-09T18:38:07.087416143Z" level=info msg="RemoveContainer for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" returns successfully" Feb 9 18:38:07.087697 kubelet[2503]: I0209 18:38:07.087662 2503 scope.go:115] "RemoveContainer" containerID="e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5" Feb 9 18:38:07.088613 env[1380]: time="2024-02-09T18:38:07.088580504Z" level=info msg="RemoveContainer for \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\"" Feb 9 18:38:07.097464 env[1380]: time="2024-02-09T18:38:07.097422577Z" level=info msg="RemoveContainer for \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\" returns successfully" Feb 9 18:38:07.098281 kubelet[2503]: I0209 18:38:07.098245 2503 scope.go:115] "RemoveContainer" containerID="a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860" Feb 9 18:38:07.099514 env[1380]: time="2024-02-09T18:38:07.099283643Z" level=info msg="RemoveContainer for \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\"" Feb 9 18:38:07.106874 env[1380]: time="2024-02-09T18:38:07.106783869Z" level=info msg="RemoveContainer for \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\" returns successfully" Feb 9 18:38:07.107081 kubelet[2503]: I0209 18:38:07.107066 2503 scope.go:115] "RemoveContainer" containerID="051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96" Feb 9 18:38:07.110696 env[1380]: time="2024-02-09T18:38:07.109219435Z" level=info msg="RemoveContainer for \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\"" Feb 9 18:38:07.121762 env[1380]: time="2024-02-09T18:38:07.121716797Z" level=info msg="RemoveContainer for \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\" returns successfully" Feb 9 18:38:07.122174 kubelet[2503]: I0209 18:38:07.122148 2503 scope.go:115] "RemoveContainer" 
containerID="a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99" Feb 9 18:38:07.124533 env[1380]: time="2024-02-09T18:38:07.124479295Z" level=info msg="RemoveContainer for \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\"" Feb 9 18:38:07.136610 env[1380]: time="2024-02-09T18:38:07.136520881Z" level=info msg="RemoveContainer for \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\" returns successfully" Feb 9 18:38:07.137387 kubelet[2503]: I0209 18:38:07.137318 2503 scope.go:115] "RemoveContainer" containerID="96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327" Feb 9 18:38:07.137982 env[1380]: time="2024-02-09T18:38:07.137894490Z" level=error msg="ContainerStatus for \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\": not found" Feb 9 18:38:07.138463 kubelet[2503]: E0209 18:38:07.138418 2503 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\": not found" containerID="96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327" Feb 9 18:38:07.138770 kubelet[2503]: I0209 18:38:07.138727 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327} err="failed to get container status \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\": rpc error: code = NotFound desc = an error occurred when try to find container \"96cdca458d56c5c605aa41558e2bee15432b0bf425b56b94a63f35a6c74dc327\": not found" Feb 9 18:38:07.138962 kubelet[2503]: I0209 18:38:07.138935 2503 scope.go:115] "RemoveContainer" 
containerID="e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5" Feb 9 18:38:07.139738 env[1380]: time="2024-02-09T18:38:07.139597910Z" level=error msg="ContainerStatus for \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\": not found" Feb 9 18:38:07.140093 kubelet[2503]: E0209 18:38:07.140050 2503 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\": not found" containerID="e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5" Feb 9 18:38:07.140093 kubelet[2503]: I0209 18:38:07.140080 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5} err="failed to get container status \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8f939d6a7044087b1f74d67bcb96f181f717b5498b9971cd89d19c7b996b2c5\": not found" Feb 9 18:38:07.140093 kubelet[2503]: I0209 18:38:07.140092 2503 scope.go:115] "RemoveContainer" containerID="a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860" Feb 9 18:38:07.141093 env[1380]: time="2024-02-09T18:38:07.140958078Z" level=error msg="ContainerStatus for \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\": not found" Feb 9 18:38:07.141588 kubelet[2503]: E0209 18:38:07.141564 2503 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\": not found" containerID="a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860" Feb 9 18:38:07.141588 kubelet[2503]: I0209 18:38:07.141590 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860} err="failed to get container status \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\": rpc error: code = NotFound desc = an error occurred when try to find container \"a7bcc2b6fac733a9cd858a1727b86a6624dd6cc59c34d05faa4999109ff90860\": not found" Feb 9 18:38:07.141588 kubelet[2503]: I0209 18:38:07.141600 2503 scope.go:115] "RemoveContainer" containerID="051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96" Feb 9 18:38:07.142298 env[1380]: time="2024-02-09T18:38:07.142183322Z" level=error msg="ContainerStatus for \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\": not found" Feb 9 18:38:07.142797 kubelet[2503]: E0209 18:38:07.142755 2503 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\": not found" containerID="051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96" Feb 9 18:38:07.142797 kubelet[2503]: I0209 18:38:07.142777 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96} err="failed to get container status \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"051af0870bb179b3758ab0f6615bc0cfe6a4085ad82a48ee8cbb8c0042b00c96\": not found" Feb 9 18:38:07.142797 kubelet[2503]: I0209 18:38:07.142786 2503 scope.go:115] "RemoveContainer" containerID="a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99" Feb 9 18:38:07.143767 env[1380]: time="2024-02-09T18:38:07.143583931Z" level=error msg="ContainerStatus for \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\": not found" Feb 9 18:38:07.144181 kubelet[2503]: E0209 18:38:07.144149 2503 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\": not found" containerID="a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99" Feb 9 18:38:07.144181 kubelet[2503]: I0209 18:38:07.144173 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99} err="failed to get container status \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\": rpc error: code = NotFound desc = an error occurred when try to find container \"a10e1e148cd231e9d5e3e0e1c1e4512495ca966d4510acba5aaa4c92aee3dd99\": not found" Feb 9 18:38:07.144181 kubelet[2503]: I0209 18:38:07.144182 2503 scope.go:115] "RemoveContainer" containerID="833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd" Feb 9 18:38:07.145872 env[1380]: time="2024-02-09T18:38:07.145817970Z" level=info msg="RemoveContainer for \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\"" Feb 9 18:38:07.155483 env[1380]: time="2024-02-09T18:38:07.155399990Z" level=info msg="RemoveContainer for 
\"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" returns successfully" Feb 9 18:38:07.155971 kubelet[2503]: I0209 18:38:07.155914 2503 scope.go:115] "RemoveContainer" containerID="833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd" Feb 9 18:38:07.156208 env[1380]: time="2024-02-09T18:38:07.156149976Z" level=error msg="ContainerStatus for \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\": not found" Feb 9 18:38:07.156329 kubelet[2503]: E0209 18:38:07.156316 2503 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\": not found" containerID="833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd" Feb 9 18:38:07.156410 kubelet[2503]: I0209 18:38:07.156359 2503 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd} err="failed to get container status \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\": rpc error: code = NotFound desc = an error occurred when try to find container \"833aee13603c77cff3aacebc1b0407c8ce4daaeb3c51bae56df49b8c5197b3fd\": not found" Feb 9 18:38:07.359996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6-rootfs.mount: Deactivated successfully. Feb 9 18:38:07.360082 systemd[1]: var-lib-kubelet-pods-944d02b6\x2d2e26\x2d4830\x2da03f\x2dde1abcd56920-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfd9qd.mount: Deactivated successfully. 
Feb 9 18:38:07.360141 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0-rootfs.mount: Deactivated successfully. Feb 9 18:38:07.360195 systemd[1]: var-lib-kubelet-pods-b2af5724\x2d0b1c\x2d472b\x2d97c2\x2de6ec80acb58e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2tngv.mount: Deactivated successfully. Feb 9 18:38:07.360246 systemd[1]: var-lib-kubelet-pods-b2af5724\x2d0b1c\x2d472b\x2d97c2\x2de6ec80acb58e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 18:38:07.360298 systemd[1]: var-lib-kubelet-pods-b2af5724\x2d0b1c\x2d472b\x2d97c2\x2de6ec80acb58e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:38:07.679315 kubelet[2503]: I0209 18:38:07.678450 2503 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=944d02b6-2e26-4830-a03f-de1abcd56920 path="/var/lib/kubelet/pods/944d02b6-2e26-4830-a03f-de1abcd56920/volumes" Feb 9 18:38:07.679315 kubelet[2503]: I0209 18:38:07.678856 2503 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=b2af5724-0b1c-472b-97c2-e6ec80acb58e path="/var/lib/kubelet/pods/b2af5724-0b1c-472b-97c2-e6ec80acb58e/volumes" Feb 9 18:38:07.758449 kubelet[2503]: E0209 18:38:07.758408 2503 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:38:08.359600 sshd[4182]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:08.362421 systemd[1]: sshd@21-10.200.20.32:22-10.200.12.6:55814.service: Deactivated successfully. Feb 9 18:38:08.363154 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:38:08.363338 systemd[1]: session-24.scope: Consumed 1.069s CPU time. Feb 9 18:38:08.363773 systemd-logind[1366]: Session 24 logged out. Waiting for processes to exit. 
Feb 9 18:38:08.364767 systemd-logind[1366]: Removed session 24. Feb 9 18:38:08.429080 systemd[1]: Started sshd@22-10.200.20.32:22-10.200.12.6:45988.service. Feb 9 18:38:08.845695 sshd[4345]: Accepted publickey for core from 10.200.12.6 port 45988 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:08.846987 sshd[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:08.850732 systemd-logind[1366]: New session 25 of user core. Feb 9 18:38:08.851481 systemd[1]: Started session-25.scope. Feb 9 18:38:10.097194 kubelet[2503]: I0209 18:38:10.097137 2503 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:10.097194 kubelet[2503]: E0209 18:38:10.097200 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2af5724-0b1c-472b-97c2-e6ec80acb58e" containerName="mount-bpf-fs" Feb 9 18:38:10.097548 kubelet[2503]: E0209 18:38:10.097210 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2af5724-0b1c-472b-97c2-e6ec80acb58e" containerName="clean-cilium-state" Feb 9 18:38:10.097548 kubelet[2503]: E0209 18:38:10.097218 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2af5724-0b1c-472b-97c2-e6ec80acb58e" containerName="mount-cgroup" Feb 9 18:38:10.097548 kubelet[2503]: E0209 18:38:10.097224 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2af5724-0b1c-472b-97c2-e6ec80acb58e" containerName="apply-sysctl-overwrites" Feb 9 18:38:10.097548 kubelet[2503]: E0209 18:38:10.097241 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="944d02b6-2e26-4830-a03f-de1abcd56920" containerName="cilium-operator" Feb 9 18:38:10.097548 kubelet[2503]: E0209 18:38:10.097248 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b2af5724-0b1c-472b-97c2-e6ec80acb58e" containerName="cilium-agent" Feb 9 18:38:10.097548 kubelet[2503]: I0209 18:38:10.097280 2503 memory_manager.go:346] "RemoveStaleState removing state" 
podUID="b2af5724-0b1c-472b-97c2-e6ec80acb58e" containerName="cilium-agent" Feb 9 18:38:10.097548 kubelet[2503]: I0209 18:38:10.097286 2503 memory_manager.go:346] "RemoveStaleState removing state" podUID="944d02b6-2e26-4830-a03f-de1abcd56920" containerName="cilium-operator" Feb 9 18:38:10.102559 systemd[1]: Created slice kubepods-burstable-pod4cff46d2_a54b_4121_93e5_ddce147b6e13.slice. Feb 9 18:38:10.106819 kubelet[2503]: W0209 18:38:10.106762 2503 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.106968 kubelet[2503]: E0209 18:38:10.106956 2503 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.107090 kubelet[2503]: W0209 18:38:10.107078 2503 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.107175 kubelet[2503]: E0209 18:38:10.107165 2503 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 
'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.107281 kubelet[2503]: W0209 18:38:10.107269 2503 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.107412 kubelet[2503]: E0209 18:38:10.107395 2503 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.107489 kubelet[2503]: W0209 18:38:10.107370 2503 reflector.go:424] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.107567 kubelet[2503]: E0209 18:38:10.107557 2503 reflector.go:140] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:ci-3510.3.2-a-37f6c6cc7b" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-37f6c6cc7b' and this object Feb 9 18:38:10.129808 kubelet[2503]: I0209 18:38:10.129784 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-config-path\") pod \"cilium-frwnt\" (UID: 
\"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.129979 kubelet[2503]: I0209 18:38:10.129968 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cni-path\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.130245 kubelet[2503]: I0209 18:38:10.130221 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-ipsec-secrets\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.130544 kubelet[2503]: I0209 18:38:10.130350 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-etc-cni-netd\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.130736 kubelet[2503]: I0209 18:38:10.130722 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-net\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.131244 kubelet[2503]: I0209 18:38:10.131217 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-run\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132105 kubelet[2503]: I0209 18:38:10.132073 2503 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-hostproc\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132190 kubelet[2503]: I0209 18:38:10.132141 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-lib-modules\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132190 kubelet[2503]: I0209 18:38:10.132169 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-xtables-lock\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132246 kubelet[2503]: I0209 18:38:10.132197 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-clustermesh-secrets\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132246 kubelet[2503]: I0209 18:38:10.132222 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-kernel\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132246 kubelet[2503]: I0209 18:38:10.132245 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-bpf-maps\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132320 kubelet[2503]: I0209 18:38:10.132269 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-hubble-tls\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132320 kubelet[2503]: I0209 18:38:10.132293 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsllj\" (UniqueName: \"kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-kube-api-access-rsllj\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.132320 kubelet[2503]: I0209 18:38:10.132317 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-cgroup\") pod \"cilium-frwnt\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " pod="kube-system/cilium-frwnt" Feb 9 18:38:10.155550 sshd[4345]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:10.159218 systemd-logind[1366]: Session 25 logged out. Waiting for processes to exit. Feb 9 18:38:10.159378 systemd[1]: sshd@22-10.200.20.32:22-10.200.12.6:45988.service: Deactivated successfully. Feb 9 18:38:10.160077 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 18:38:10.161142 systemd-logind[1366]: Removed session 25. Feb 9 18:38:10.230994 systemd[1]: Started sshd@23-10.200.20.32:22-10.200.12.6:45998.service. 
Feb 9 18:38:10.680712 sshd[4356]: Accepted publickey for core from 10.200.12.6 port 45998 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:10.681949 sshd[4356]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:10.686540 systemd[1]: Started session-26.scope. Feb 9 18:38:10.687019 systemd-logind[1366]: New session 26 of user core. Feb 9 18:38:11.083897 sshd[4356]: pam_unix(sshd:session): session closed for user core Feb 9 18:38:11.086147 systemd[1]: sshd@23-10.200.20.32:22-10.200.12.6:45998.service: Deactivated successfully. Feb 9 18:38:11.086874 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 18:38:11.087380 systemd-logind[1366]: Session 26 logged out. Waiting for processes to exit. Feb 9 18:38:11.088060 systemd-logind[1366]: Removed session 26. Feb 9 18:38:11.153312 systemd[1]: Started sshd@24-10.200.20.32:22-10.200.12.6:46010.service. Feb 9 18:38:11.234043 kubelet[2503]: E0209 18:38:11.234014 2503 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 18:38:11.234417 kubelet[2503]: E0209 18:38:11.234404 2503 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-frwnt: failed to sync secret cache: timed out waiting for the condition Feb 9 18:38:11.234566 kubelet[2503]: E0209 18:38:11.234552 2503 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-hubble-tls podName:4cff46d2-a54b-4121-93e5-ddce147b6e13 nodeName:}" failed. No retries permitted until 2024-02-09 18:38:11.734530619 +0000 UTC m=+224.209387932 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-hubble-tls") pod "cilium-frwnt" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13") : failed to sync secret cache: timed out waiting for the condition Feb 9 18:38:11.234672 kubelet[2503]: E0209 18:38:11.234661 2503 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Feb 9 18:38:11.234820 kubelet[2503]: E0209 18:38:11.234806 2503 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-clustermesh-secrets podName:4cff46d2-a54b-4121-93e5-ddce147b6e13 nodeName:}" failed. No retries permitted until 2024-02-09 18:38:11.734795148 +0000 UTC m=+224.209652461 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-clustermesh-secrets") pod "cilium-frwnt" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13") : failed to sync secret cache: timed out waiting for the condition Feb 9 18:38:11.234901 kubelet[2503]: E0209 18:38:11.234380 2503 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 18:38:11.235028 kubelet[2503]: E0209 18:38:11.235010 2503 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-config-path podName:4cff46d2-a54b-4121-93e5-ddce147b6e13 nodeName:}" failed. No retries permitted until 2024-02-09 18:38:11.734999115 +0000 UTC m=+224.209856388 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-config-path") pod "cilium-frwnt" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13") : failed to sync configmap cache: timed out waiting for the condition Feb 9 18:38:11.568324 sshd[4370]: Accepted publickey for core from 10.200.12.6 port 46010 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:38:11.569889 sshd[4370]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:38:11.573765 systemd-logind[1366]: New session 27 of user core. Feb 9 18:38:11.574058 systemd[1]: Started session-27.scope. Feb 9 18:38:11.906258 env[1380]: time="2024-02-09T18:38:11.906155911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-frwnt,Uid:4cff46d2-a54b-4121-93e5-ddce147b6e13,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:11.944334 env[1380]: time="2024-02-09T18:38:11.944254495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:11.944477 env[1380]: time="2024-02-09T18:38:11.944336498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:11.944477 env[1380]: time="2024-02-09T18:38:11.944363259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:11.944540 env[1380]: time="2024-02-09T18:38:11.944488744Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8 pid=4389 runtime=io.containerd.runc.v2 Feb 9 18:38:11.957993 systemd[1]: Started cri-containerd-fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8.scope. 
Feb 9 18:38:11.980965 env[1380]: time="2024-02-09T18:38:11.980912388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-frwnt,Uid:4cff46d2-a54b-4121-93e5-ddce147b6e13,Namespace:kube-system,Attempt:0,} returns sandbox id \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\"" Feb 9 18:38:11.983875 env[1380]: time="2024-02-09T18:38:11.983439238Z" level=info msg="CreateContainer within sandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:38:12.019550 env[1380]: time="2024-02-09T18:38:12.019469548Z" level=info msg="CreateContainer within sandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\"" Feb 9 18:38:12.021190 env[1380]: time="2024-02-09T18:38:12.020177733Z" level=info msg="StartContainer for \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\"" Feb 9 18:38:12.034466 systemd[1]: Started cri-containerd-14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1.scope. Feb 9 18:38:12.044110 systemd[1]: cri-containerd-14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1.scope: Deactivated successfully. Feb 9 18:38:12.044417 systemd[1]: Stopped cri-containerd-14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1.scope. 
Feb 9 18:38:12.118525 env[1380]: time="2024-02-09T18:38:12.118461197Z" level=info msg="shim disconnected" id=14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1 Feb 9 18:38:12.118525 env[1380]: time="2024-02-09T18:38:12.118521879Z" level=warning msg="cleaning up after shim disconnected" id=14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1 namespace=k8s.io Feb 9 18:38:12.118525 env[1380]: time="2024-02-09T18:38:12.118531000Z" level=info msg="cleaning up dead shim" Feb 9 18:38:12.125464 env[1380]: time="2024-02-09T18:38:12.125415682Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4449 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T18:38:12Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 18:38:12.125778 env[1380]: time="2024-02-09T18:38:12.125643890Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed" Feb 9 18:38:12.127796 env[1380]: time="2024-02-09T18:38:12.127750405Z" level=error msg="Failed to pipe stdout of container \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\"" error="reading from a closed fifo" Feb 9 18:38:12.128757 env[1380]: time="2024-02-09T18:38:12.128726759Z" level=error msg="Failed to pipe stderr of container \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\"" error="reading from a closed fifo" Feb 9 18:38:12.133042 env[1380]: time="2024-02-09T18:38:12.132981989Z" level=error msg="StartContainer for \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Feb 9 18:38:12.133280 kubelet[2503]: E0209 18:38:12.133254 2503 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1" Feb 9 18:38:12.133417 kubelet[2503]: E0209 18:38:12.133398 2503 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 18:38:12.133417 kubelet[2503]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 18:38:12.133417 kubelet[2503]: rm /hostbin/cilium-mount Feb 9 18:38:12.133417 kubelet[2503]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rsllj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-frwnt_kube-system(4cff46d2-a54b-4121-93e5-ddce147b6e13): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 18:38:12.133567 kubelet[2503]: E0209 18:38:12.133444 2503 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-frwnt" podUID=4cff46d2-a54b-4121-93e5-ddce147b6e13 Feb 9 18:38:12.571989 kubelet[2503]: I0209 18:38:12.571960 2503 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-37f6c6cc7b" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 18:38:12.57191566 +0000 UTC m=+225.046772973 LastTransitionTime:2024-02-09 18:38:12.57191566 +0000 UTC m=+225.046772973 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 18:38:12.759422 kubelet[2503]: E0209 18:38:12.759391 2503 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not 
initialized" Feb 9 18:38:13.088619 env[1380]: time="2024-02-09T18:38:13.088574027Z" level=info msg="StopPodSandbox for \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\"" Feb 9 18:38:13.090599 env[1380]: time="2024-02-09T18:38:13.088641590Z" level=info msg="Container to stop \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:38:13.090188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8-shm.mount: Deactivated successfully. Feb 9 18:38:13.102039 systemd[1]: cri-containerd-fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8.scope: Deactivated successfully. Feb 9 18:38:13.124986 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8-rootfs.mount: Deactivated successfully. Feb 9 18:38:13.147343 env[1380]: time="2024-02-09T18:38:13.147282535Z" level=info msg="shim disconnected" id=fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8 Feb 9 18:38:13.147571 env[1380]: time="2024-02-09T18:38:13.147553024Z" level=warning msg="cleaning up after shim disconnected" id=fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8 namespace=k8s.io Feb 9 18:38:13.147653 env[1380]: time="2024-02-09T18:38:13.147639227Z" level=info msg="cleaning up dead shim" Feb 9 18:38:13.155035 env[1380]: time="2024-02-09T18:38:13.154993486Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4480 runtime=io.containerd.runc.v2\n" Feb 9 18:38:13.155324 env[1380]: time="2024-02-09T18:38:13.155295537Z" level=info msg="TearDown network for sandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" successfully" Feb 9 18:38:13.155376 env[1380]: time="2024-02-09T18:38:13.155323098Z" level=info msg="StopPodSandbox for 
\"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" returns successfully" Feb 9 18:38:13.251962 kubelet[2503]: I0209 18:38:13.251921 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cni-path\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.251962 kubelet[2503]: I0209 18:38:13.251968 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-lib-modules\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252158 kubelet[2503]: I0209 18:38:13.251988 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-kernel\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252158 kubelet[2503]: I0209 18:38:13.252008 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-net\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252158 kubelet[2503]: I0209 18:38:13.252027 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-run\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252158 kubelet[2503]: I0209 18:38:13.252051 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" 
(UniqueName: \"kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-ipsec-secrets\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252158 kubelet[2503]: I0209 18:38:13.252070 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-cgroup\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252158 kubelet[2503]: I0209 18:38:13.252089 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-hubble-tls\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252306 kubelet[2503]: I0209 18:38:13.252127 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rsllj\" (UniqueName: \"kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-kube-api-access-rsllj\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252306 kubelet[2503]: I0209 18:38:13.252148 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-config-path\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252306 kubelet[2503]: I0209 18:38:13.252176 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-hostproc\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252306 kubelet[2503]: I0209 
18:38:13.252192 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-xtables-lock\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252306 kubelet[2503]: I0209 18:38:13.252208 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-bpf-maps\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252306 kubelet[2503]: I0209 18:38:13.252227 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-etc-cni-netd\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.252511 kubelet[2503]: I0209 18:38:13.252245 2503 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-clustermesh-secrets\") pod \"4cff46d2-a54b-4121-93e5-ddce147b6e13\" (UID: \"4cff46d2-a54b-4121-93e5-ddce147b6e13\") " Feb 9 18:38:13.257000 kubelet[2503]: I0209 18:38:13.252580 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257000 kubelet[2503]: I0209 18:38:13.252618 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cni-path" (OuterVolumeSpecName: "cni-path") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257000 kubelet[2503]: I0209 18:38:13.252632 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257000 kubelet[2503]: I0209 18:38:13.252646 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257000 kubelet[2503]: I0209 18:38:13.252669 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.256401 systemd[1]: var-lib-kubelet-pods-4cff46d2\x2da54b\x2d4121\x2d93e5\x2dddce147b6e13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Feb 9 18:38:13.257467 kubelet[2503]: I0209 18:38:13.252709 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257467 kubelet[2503]: I0209 18:38:13.252881 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-hostproc" (OuterVolumeSpecName: "hostproc") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257467 kubelet[2503]: W0209 18:38:13.253181 2503 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4cff46d2-a54b-4121-93e5-ddce147b6e13/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 18:38:13.257467 kubelet[2503]: I0209 18:38:13.254900 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257467 kubelet[2503]: I0209 18:38:13.254935 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257645 kubelet[2503]: I0209 18:38:13.254965 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:38:13.257645 kubelet[2503]: I0209 18:38:13.257342 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:38:13.259323 kubelet[2503]: I0209 18:38:13.259113 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:38:13.262413 systemd[1]: var-lib-kubelet-pods-4cff46d2\x2da54b\x2d4121\x2d93e5\x2dddce147b6e13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 18:38:13.266481 kubelet[2503]: I0209 18:38:13.266455 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:38:13.267041 kubelet[2503]: I0209 18:38:13.267011 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-kube-api-access-rsllj" (OuterVolumeSpecName: "kube-api-access-rsllj") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "kube-api-access-rsllj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:38:13.267182 kubelet[2503]: I0209 18:38:13.267161 2503 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "4cff46d2-a54b-4121-93e5-ddce147b6e13" (UID: "4cff46d2-a54b-4121-93e5-ddce147b6e13"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:38:13.353167 kubelet[2503]: I0209 18:38:13.353065 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.353167 kubelet[2503]: I0209 18:38:13.353099 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-cgroup\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.353167 kubelet[2503]: I0209 18:38:13.353110 2503 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-hubble-tls\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.353167 kubelet[2503]: I0209 18:38:13.353120 2503 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-rsllj\" (UniqueName: 
\"kubernetes.io/projected/4cff46d2-a54b-4121-93e5-ddce147b6e13-kube-api-access-rsllj\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.353167 kubelet[2503]: I0209 18:38:13.353132 2503 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-bpf-maps\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.353167 kubelet[2503]: I0209 18:38:13.353143 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-config-path\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.353942 kubelet[2503]: I0209 18:38:13.353153 2503 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-hostproc\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354112 kubelet[2503]: I0209 18:38:13.354099 2503 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-xtables-lock\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354257 kubelet[2503]: I0209 18:38:13.354246 2503 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-etc-cni-netd\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354321 kubelet[2503]: I0209 18:38:13.354312 2503 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4cff46d2-a54b-4121-93e5-ddce147b6e13-clustermesh-secrets\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354383 kubelet[2503]: I0209 18:38:13.354374 2503 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cni-path\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354445 kubelet[2503]: I0209 18:38:13.354436 2503 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-lib-modules\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354510 kubelet[2503]: I0209 18:38:13.354501 2503 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354572 kubelet[2503]: I0209 18:38:13.354563 2503 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-host-proc-sys-net\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.354635 kubelet[2503]: I0209 18:38:13.354626 2503 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4cff46d2-a54b-4121-93e5-ddce147b6e13-cilium-run\") on node \"ci-3510.3.2-a-37f6c6cc7b\" DevicePath \"\"" Feb 9 18:38:13.679200 systemd[1]: Removed slice kubepods-burstable-pod4cff46d2_a54b_4121_93e5_ddce147b6e13.slice. Feb 9 18:38:13.748208 systemd[1]: var-lib-kubelet-pods-4cff46d2\x2da54b\x2d4121\x2d93e5\x2dddce147b6e13-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 18:38:13.748316 systemd[1]: var-lib-kubelet-pods-4cff46d2\x2da54b\x2d4121\x2d93e5\x2dddce147b6e13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drsllj.mount: Deactivated successfully. 
Feb 9 18:38:14.091580 kubelet[2503]: I0209 18:38:14.091543 2503 scope.go:115] "RemoveContainer" containerID="14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1" Feb 9 18:38:14.094319 env[1380]: time="2024-02-09T18:38:14.094029353Z" level=info msg="RemoveContainer for \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\"" Feb 9 18:38:14.103174 env[1380]: time="2024-02-09T18:38:14.103081392Z" level=info msg="RemoveContainer for \"14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1\" returns successfully" Feb 9 18:38:14.121090 kubelet[2503]: I0209 18:38:14.121058 2503 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:38:14.121288 kubelet[2503]: E0209 18:38:14.121275 2503 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4cff46d2-a54b-4121-93e5-ddce147b6e13" containerName="mount-cgroup" Feb 9 18:38:14.121374 kubelet[2503]: I0209 18:38:14.121364 2503 memory_manager.go:346] "RemoveStaleState removing state" podUID="4cff46d2-a54b-4121-93e5-ddce147b6e13" containerName="mount-cgroup" Feb 9 18:38:14.126550 systemd[1]: Created slice kubepods-burstable-pod3f7dbd48_ced7_423f_9ced_6df5d8657cdf.slice. 
Feb 9 18:38:14.159750 kubelet[2503]: I0209 18:38:14.159715 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-etc-cni-netd\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.159942 kubelet[2503]: I0209 18:38:14.159774 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-host-proc-sys-net\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.159942 kubelet[2503]: I0209 18:38:14.159798 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-cilium-run\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.159942 kubelet[2503]: I0209 18:38:14.159819 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-hubble-tls\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.159942 kubelet[2503]: I0209 18:38:14.159849 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdqr\" (UniqueName: \"kubernetes.io/projected/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-kube-api-access-ztdqr\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.159942 kubelet[2503]: I0209 18:38:14.159869 2503 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-xtables-lock\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.159942 kubelet[2503]: I0209 18:38:14.159891 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-cilium-config-path\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160098 kubelet[2503]: I0209 18:38:14.159917 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-host-proc-sys-kernel\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160098 kubelet[2503]: I0209 18:38:14.159937 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-cilium-cgroup\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160098 kubelet[2503]: I0209 18:38:14.159957 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-clustermesh-secrets\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160098 kubelet[2503]: I0209 18:38:14.159990 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-cni-path\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160098 kubelet[2503]: I0209 18:38:14.160012 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-cilium-ipsec-secrets\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160098 kubelet[2503]: I0209 18:38:14.160034 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-bpf-maps\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160290 kubelet[2503]: I0209 18:38:14.160052 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-hostproc\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.160290 kubelet[2503]: I0209 18:38:14.160082 2503 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f7dbd48-ced7-423f-9ced-6df5d8657cdf-lib-modules\") pod \"cilium-rzkvs\" (UID: \"3f7dbd48-ced7-423f-9ced-6df5d8657cdf\") " pod="kube-system/cilium-rzkvs" Feb 9 18:38:14.430313 env[1380]: time="2024-02-09T18:38:14.429626642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzkvs,Uid:3f7dbd48-ced7-423f-9ced-6df5d8657cdf,Namespace:kube-system,Attempt:0,}" Feb 9 18:38:14.459973 env[1380]: time="2024-02-09T18:38:14.459894667Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:38:14.460150 env[1380]: time="2024-02-09T18:38:14.460127515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:38:14.460229 env[1380]: time="2024-02-09T18:38:14.460209638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:38:14.460537 env[1380]: time="2024-02-09T18:38:14.460468847Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380 pid=4508 runtime=io.containerd.runc.v2 Feb 9 18:38:14.471665 systemd[1]: Started cri-containerd-f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380.scope. Feb 9 18:38:14.496128 env[1380]: time="2024-02-09T18:38:14.496087461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rzkvs,Uid:3f7dbd48-ced7-423f-9ced-6df5d8657cdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\"" Feb 9 18:38:14.502939 env[1380]: time="2024-02-09T18:38:14.502901460Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 18:38:14.531736 env[1380]: time="2024-02-09T18:38:14.531665352Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738\"" Feb 9 18:38:14.534052 env[1380]: time="2024-02-09T18:38:14.533818028Z" level=info msg="StartContainer for \"8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738\"" Feb 9 
18:38:14.549254 systemd[1]: Started cri-containerd-8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738.scope. Feb 9 18:38:14.579635 env[1380]: time="2024-02-09T18:38:14.579590719Z" level=info msg="StartContainer for \"8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738\" returns successfully" Feb 9 18:38:14.586088 systemd[1]: cri-containerd-8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738.scope: Deactivated successfully. Feb 9 18:38:14.652530 env[1380]: time="2024-02-09T18:38:14.652485644Z" level=info msg="shim disconnected" id=8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738 Feb 9 18:38:14.652899 env[1380]: time="2024-02-09T18:38:14.652869777Z" level=warning msg="cleaning up after shim disconnected" id=8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738 namespace=k8s.io Feb 9 18:38:14.653006 env[1380]: time="2024-02-09T18:38:14.652991422Z" level=info msg="cleaning up dead shim" Feb 9 18:38:14.660745 env[1380]: time="2024-02-09T18:38:14.660713333Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4590 runtime=io.containerd.runc.v2\n" Feb 9 18:38:15.096376 env[1380]: time="2024-02-09T18:38:15.096329139Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 18:38:15.138271 env[1380]: time="2024-02-09T18:38:15.138225052Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9\"" Feb 9 18:38:15.139112 env[1380]: time="2024-02-09T18:38:15.139086042Z" level=info msg="StartContainer for \"2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9\"" Feb 9 
18:38:15.158088 systemd[1]: Started cri-containerd-2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9.scope. Feb 9 18:38:15.188270 systemd[1]: cri-containerd-2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9.scope: Deactivated successfully. Feb 9 18:38:15.191303 env[1380]: time="2024-02-09T18:38:15.191263516Z" level=info msg="StartContainer for \"2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9\" returns successfully" Feb 9 18:38:15.219477 env[1380]: time="2024-02-09T18:38:15.219429227Z" level=info msg="shim disconnected" id=2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9 Feb 9 18:38:15.219477 env[1380]: time="2024-02-09T18:38:15.219473388Z" level=warning msg="cleaning up after shim disconnected" id=2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9 namespace=k8s.io Feb 9 18:38:15.219477 env[1380]: time="2024-02-09T18:38:15.219483389Z" level=info msg="cleaning up dead shim" Feb 9 18:38:15.223984 kubelet[2503]: W0209 18:38:15.223919 2503 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cff46d2_a54b_4121_93e5_ddce147b6e13.slice/cri-containerd-14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1.scope WatchSource:0}: container "14546d90e38c75b22ce3d34ef15e967ed23e267d2c1dcbfeafb7564d9b6e68b1" in namespace "k8s.io": not found Feb 9 18:38:15.230542 env[1380]: time="2024-02-09T18:38:15.230501096Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4650 runtime=io.containerd.runc.v2\n" Feb 9 18:38:15.676175 kubelet[2503]: I0209 18:38:15.676139 2503 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=4cff46d2-a54b-4121-93e5-ddce147b6e13 path="/var/lib/kubelet/pods/4cff46d2-a54b-4121-93e5-ddce147b6e13/volumes" Feb 9 18:38:15.748208 systemd[1]: 
run-containerd-runc-k8s.io-2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9-runc.Qp4lUf.mount: Deactivated successfully. Feb 9 18:38:15.748301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9-rootfs.mount: Deactivated successfully. Feb 9 18:38:16.102074 env[1380]: time="2024-02-09T18:38:16.102034455Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 18:38:16.136591 env[1380]: time="2024-02-09T18:38:16.136538627Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7\"" Feb 9 18:38:16.137483 env[1380]: time="2024-02-09T18:38:16.137456379Z" level=info msg="StartContainer for \"0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7\"" Feb 9 18:38:16.157058 systemd[1]: Started cri-containerd-0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7.scope. Feb 9 18:38:16.186094 systemd[1]: cri-containerd-0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7.scope: Deactivated successfully. 
Feb 9 18:38:16.190715 env[1380]: time="2024-02-09T18:38:16.190664608Z" level=info msg="StartContainer for \"0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7\" returns successfully" Feb 9 18:38:16.219900 env[1380]: time="2024-02-09T18:38:16.219853274Z" level=info msg="shim disconnected" id=0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7 Feb 9 18:38:16.220143 env[1380]: time="2024-02-09T18:38:16.220124763Z" level=warning msg="cleaning up after shim disconnected" id=0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7 namespace=k8s.io Feb 9 18:38:16.220209 env[1380]: time="2024-02-09T18:38:16.220196526Z" level=info msg="cleaning up dead shim" Feb 9 18:38:16.227943 env[1380]: time="2024-02-09T18:38:16.227901076Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4708 runtime=io.containerd.runc.v2\n" Feb 9 18:38:16.673226 kubelet[2503]: E0209 18:38:16.673185 2503 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-m7hcs" podUID=61d7ef77-97e1-4e22-b57d-08b32f8520d9 Feb 9 18:38:16.748268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7-rootfs.mount: Deactivated successfully. 
Feb 9 18:38:17.106756 env[1380]: time="2024-02-09T18:38:17.106714626Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 18:38:17.140440 env[1380]: time="2024-02-09T18:38:17.140393329Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1\"" Feb 9 18:38:17.141375 env[1380]: time="2024-02-09T18:38:17.141348322Z" level=info msg="StartContainer for \"0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1\"" Feb 9 18:38:17.160849 systemd[1]: Started cri-containerd-0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1.scope. Feb 9 18:38:17.188984 systemd[1]: cri-containerd-0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1.scope: Deactivated successfully. 
Feb 9 18:38:17.192998 env[1380]: time="2024-02-09T18:38:17.192937333Z" level=info msg="StartContainer for \"0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1\" returns successfully" Feb 9 18:38:17.228119 env[1380]: time="2024-02-09T18:38:17.228072966Z" level=info msg="shim disconnected" id=0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1 Feb 9 18:38:17.228440 env[1380]: time="2024-02-09T18:38:17.228407618Z" level=warning msg="cleaning up after shim disconnected" id=0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1 namespace=k8s.io Feb 9 18:38:17.228527 env[1380]: time="2024-02-09T18:38:17.228513302Z" level=info msg="cleaning up dead shim" Feb 9 18:38:17.235991 env[1380]: time="2024-02-09T18:38:17.235955363Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4766 runtime=io.containerd.runc.v2\n" Feb 9 18:38:17.748311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1-rootfs.mount: Deactivated successfully. Feb 9 18:38:17.760293 kubelet[2503]: E0209 18:38:17.760265 2503 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 18:38:18.109491 env[1380]: time="2024-02-09T18:38:18.109389859Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 18:38:18.139033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098385518.mount: Deactivated successfully. 
Feb 9 18:38:18.155337 env[1380]: time="2024-02-09T18:38:18.155283789Z" level=info msg="CreateContainer within sandbox \"f2c900ee2d3edb1f78c674f6ed068ac4712e17ff1c79a33bb37b6938c726a380\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228\"" Feb 9 18:38:18.155969 env[1380]: time="2024-02-09T18:38:18.155944372Z" level=info msg="StartContainer for \"b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228\"" Feb 9 18:38:18.172396 systemd[1]: Started cri-containerd-b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228.scope. Feb 9 18:38:18.209087 env[1380]: time="2024-02-09T18:38:18.209040595Z" level=info msg="StartContainer for \"b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228\" returns successfully" Feb 9 18:38:18.335611 kubelet[2503]: W0209 18:38:18.334007 2503 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7dbd48_ced7_423f_9ced_6df5d8657cdf.slice/cri-containerd-8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738.scope WatchSource:0}: task 8e509c527b5f3cf92eea9c1a86882129de541dc6b29afb5ff42f68e8aed02738 not found: not found Feb 9 18:38:18.674017 kubelet[2503]: E0209 18:38:18.673880 2503 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-m7hcs" podUID=61d7ef77-97e1-4e22-b57d-08b32f8520d9 Feb 9 18:38:18.718712 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Feb 9 18:38:20.042897 systemd[1]: run-containerd-runc-k8s.io-b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228-runc.rLGUd1.mount: Deactivated successfully. 
Feb 9 18:38:20.673011 kubelet[2503]: E0209 18:38:20.672972 2503 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-m7hcs" podUID=61d7ef77-97e1-4e22-b57d-08b32f8520d9
Feb 9 18:38:21.305035 systemd-networkd[1533]: lxc_health: Link UP
Feb 9 18:38:21.317870 systemd-networkd[1533]: lxc_health: Gained carrier
Feb 9 18:38:21.318707 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 18:38:21.442936 kubelet[2503]: W0209 18:38:21.442902 2503 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7dbd48_ced7_423f_9ced_6df5d8657cdf.slice/cri-containerd-2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9.scope WatchSource:0}: task 2e003ff853a16d5f6ea77792c4d9121d74cda5664d0a90e1e57ce4143cbfbda9 not found: not found
Feb 9 18:38:22.223755 systemd[1]: run-containerd-runc-k8s.io-b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228-runc.RsSjuv.mount: Deactivated successfully.
Feb 9 18:38:22.450286 kubelet[2503]: I0209 18:38:22.450251 2503 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-rzkvs" podStartSLOduration=8.450205744 pod.CreationTimestamp="2024-02-09 18:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:38:19.12417857 +0000 UTC m=+231.599035883" watchObservedRunningTime="2024-02-09 18:38:22.450205744 +0000 UTC m=+234.925063057"
Feb 9 18:38:22.673101 kubelet[2503]: E0209 18:38:22.672992 2503 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-787d4945fb-m7hcs" podUID=61d7ef77-97e1-4e22-b57d-08b32f8520d9
Feb 9 18:38:23.220849 systemd-networkd[1533]: lxc_health: Gained IPv6LL
Feb 9 18:38:24.413602 systemd[1]: run-containerd-runc-k8s.io-b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228-runc.NnNe6R.mount: Deactivated successfully.
Feb 9 18:38:24.550129 kubelet[2503]: W0209 18:38:24.550078 2503 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7dbd48_ced7_423f_9ced_6df5d8657cdf.slice/cri-containerd-0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7.scope WatchSource:0}: task 0ccbd39ccba464a944e6ac5352e593f1e90b8257114ad901b08c992e0e7700c7 not found: not found
Feb 9 18:38:26.560977 systemd[1]: run-containerd-runc-k8s.io-b7719438f81566ce484788032eb48b89282890bfea586e6ddb0fe4bc9366e228-runc.hJu94d.mount: Deactivated successfully.
Feb 9 18:38:26.673884 sshd[4370]: pam_unix(sshd:session): session closed for user core
Feb 9 18:38:26.677235 systemd-logind[1366]: Session 27 logged out. Waiting for processes to exit.
Feb 9 18:38:26.677885 systemd[1]: session-27.scope: Deactivated successfully.
Feb 9 18:38:26.678809 systemd-logind[1366]: Removed session 27.
Feb 9 18:38:26.679204 systemd[1]: sshd@24-10.200.20.32:22-10.200.12.6:46010.service: Deactivated successfully.
Feb 9 18:38:27.643189 env[1380]: time="2024-02-09T18:38:27.643004883Z" level=info msg="StopPodSandbox for \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\""
Feb 9 18:38:27.643189 env[1380]: time="2024-02-09T18:38:27.643096166Z" level=info msg="TearDown network for sandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" successfully"
Feb 9 18:38:27.643189 env[1380]: time="2024-02-09T18:38:27.643128487Z" level=info msg="StopPodSandbox for \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" returns successfully"
Feb 9 18:38:27.643583 env[1380]: time="2024-02-09T18:38:27.643465659Z" level=info msg="RemovePodSandbox for \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\""
Feb 9 18:38:27.643583 env[1380]: time="2024-02-09T18:38:27.643493340Z" level=info msg="Forcibly stopping sandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\""
Feb 9 18:38:27.643583 env[1380]: time="2024-02-09T18:38:27.643558142Z" level=info msg="TearDown network for sandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" successfully"
Feb 9 18:38:27.652792 env[1380]: time="2024-02-09T18:38:27.652749343Z" level=info msg="RemovePodSandbox \"97903318ef16d72b46d1b8b8a2848c061708fc5ee96129ab745e40be398d16b6\" returns successfully"
Feb 9 18:38:27.653320 env[1380]: time="2024-02-09T18:38:27.653291282Z" level=info msg="StopPodSandbox for \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\""
Feb 9 18:38:27.653405 env[1380]: time="2024-02-09T18:38:27.653363644Z" level=info msg="TearDown network for sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" successfully"
Feb 9 18:38:27.653405 env[1380]: time="2024-02-09T18:38:27.653391445Z" level=info msg="StopPodSandbox for \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" returns successfully"
Feb 9 18:38:27.653702 env[1380]: time="2024-02-09T18:38:27.653648094Z" level=info msg="RemovePodSandbox for \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\""
Feb 9 18:38:27.655745 env[1380]: time="2024-02-09T18:38:27.653809140Z" level=info msg="Forcibly stopping sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\""
Feb 9 18:38:27.655745 env[1380]: time="2024-02-09T18:38:27.653877662Z" level=info msg="TearDown network for sandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" successfully"
Feb 9 18:38:27.661525 kubelet[2503]: W0209 18:38:27.661487 2503 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7dbd48_ced7_423f_9ced_6df5d8657cdf.slice/cri-containerd-0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1.scope WatchSource:0}: task 0fb8de6520025342b6a88e3f6741a0bdcf609f6db00fc94dc58f425039af3ef1 not found: not found
Feb 9 18:38:27.672403 env[1380]: time="2024-02-09T18:38:27.672365946Z" level=info msg="RemovePodSandbox \"845c356d18e1c44f86ee4775e576f8d31d554eb78bb9ce79bce78da810294ca0\" returns successfully"
Feb 9 18:38:27.672975 env[1380]: time="2024-02-09T18:38:27.672953287Z" level=info msg="StopPodSandbox for \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\""
Feb 9 18:38:27.673159 env[1380]: time="2024-02-09T18:38:27.673119452Z" level=info msg="TearDown network for sandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" successfully"
Feb 9 18:38:27.673230 env[1380]: time="2024-02-09T18:38:27.673214296Z" level=info msg="StopPodSandbox for \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" returns successfully"
Feb 9 18:38:27.673554 env[1380]: time="2024-02-09T18:38:27.673533267Z" level=info msg="RemovePodSandbox for \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\""
Feb 9 18:38:27.673708 env[1380]: time="2024-02-09T18:38:27.673649311Z" level=info msg="Forcibly stopping sandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\""
Feb 9 18:38:27.673830 env[1380]: time="2024-02-09T18:38:27.673810997Z" level=info msg="TearDown network for sandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" successfully"
Feb 9 18:38:27.681356 env[1380]: time="2024-02-09T18:38:27.681330659Z" level=info msg="RemovePodSandbox \"fbe2f1cc05ab22e92e279772738e587f502b75905ee2e42c9bfaef036f25d8b8\" returns successfully"
Feb 9 18:38:41.719814 kubelet[2503]: E0209 18:38:41.719746 2503 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.32:54278->10.200.20.33:2379: read: connection timed out
Feb 9 18:38:41.722079 systemd[1]: cri-containerd-d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d.scope: Deactivated successfully.
Feb 9 18:38:41.722395 systemd[1]: cri-containerd-d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d.scope: Consumed 1.623s CPU time.
Feb 9 18:38:41.741146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d-rootfs.mount: Deactivated successfully.
Feb 9 18:38:41.784432 env[1380]: time="2024-02-09T18:38:41.784382975Z" level=info msg="shim disconnected" id=d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d
Feb 9 18:38:41.784432 env[1380]: time="2024-02-09T18:38:41.784431096Z" level=warning msg="cleaning up after shim disconnected" id=d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d namespace=k8s.io
Feb 9 18:38:41.784886 env[1380]: time="2024-02-09T18:38:41.784442097Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:41.791667 env[1380]: time="2024-02-09T18:38:41.791619105Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5445 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:41.855156 systemd[1]: cri-containerd-eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2.scope: Deactivated successfully.
Feb 9 18:38:41.855448 systemd[1]: cri-containerd-eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2.scope: Consumed 3.384s CPU time.
Feb 9 18:38:41.873749 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2-rootfs.mount: Deactivated successfully.
Feb 9 18:38:41.888190 env[1380]: time="2024-02-09T18:38:41.888151000Z" level=info msg="shim disconnected" id=eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2
Feb 9 18:38:41.888551 env[1380]: time="2024-02-09T18:38:41.888530653Z" level=warning msg="cleaning up after shim disconnected" id=eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2 namespace=k8s.io
Feb 9 18:38:41.888638 env[1380]: time="2024-02-09T18:38:41.888624536Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:41.896090 env[1380]: time="2024-02-09T18:38:41.896058033Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5470 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:42.149403 kubelet[2503]: I0209 18:38:42.149305 2503 scope.go:115] "RemoveContainer" containerID="eab601c443e2792a9a6cd49f90a4efcce4fabbbd69c60fc79d657eeddc79f4f2"
Feb 9 18:38:42.151944 env[1380]: time="2024-02-09T18:38:42.151911349Z" level=info msg="CreateContainer within sandbox \"39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 18:38:42.152675 kubelet[2503]: I0209 18:38:42.152655 2503 scope.go:115] "RemoveContainer" containerID="d814a6d89d3ad96104600f1ab8c3dea5d6e75000c6869cbf265fa645b997927d"
Feb 9 18:38:42.154232 env[1380]: time="2024-02-09T18:38:42.154202869Z" level=info msg="CreateContainer within sandbox \"bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Feb 9 18:38:42.173134 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396186602.mount: Deactivated successfully.
Feb 9 18:38:42.188765 env[1380]: time="2024-02-09T18:38:42.188728541Z" level=info msg="CreateContainer within sandbox \"39e713e1edeefbbadd839ec3afa6300d1d6eedc7737341f39147d1af10310ba9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7bc1f30043f5b4d40df0210893fe17167835c386416ddba26e95ae0065ce64ef\""
Feb 9 18:38:42.189395 env[1380]: time="2024-02-09T18:38:42.189372203Z" level=info msg="StartContainer for \"7bc1f30043f5b4d40df0210893fe17167835c386416ddba26e95ae0065ce64ef\""
Feb 9 18:38:42.200585 env[1380]: time="2024-02-09T18:38:42.200543989Z" level=info msg="CreateContainer within sandbox \"bedf98605d8c58a93d5f1d4d30b7ac40d39c42b19255541d9a6e883e70040ec9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"003e0482cad4b3d14df6c9e8a785e6671dc070312edac8e10c71f09e7f0cf19f\""
Feb 9 18:38:42.201135 env[1380]: time="2024-02-09T18:38:42.201104448Z" level=info msg="StartContainer for \"003e0482cad4b3d14df6c9e8a785e6671dc070312edac8e10c71f09e7f0cf19f\""
Feb 9 18:38:42.208365 systemd[1]: Started cri-containerd-7bc1f30043f5b4d40df0210893fe17167835c386416ddba26e95ae0065ce64ef.scope.
Feb 9 18:38:42.230775 systemd[1]: Started cri-containerd-003e0482cad4b3d14df6c9e8a785e6671dc070312edac8e10c71f09e7f0cf19f.scope.
Feb 9 18:38:42.256998 env[1380]: time="2024-02-09T18:38:42.256950496Z" level=info msg="StartContainer for \"7bc1f30043f5b4d40df0210893fe17167835c386416ddba26e95ae0065ce64ef\" returns successfully"
Feb 9 18:38:42.286192 env[1380]: time="2024-02-09T18:38:42.286149745Z" level=info msg="StartContainer for \"003e0482cad4b3d14df6c9e8a785e6671dc070312edac8e10c71f09e7f0cf19f\" returns successfully"
Feb 9 18:38:42.742518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212177192.mount: Deactivated successfully.
Feb 9 18:38:44.342621 kubelet[2503]: E0209 18:38:44.342508 2503 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b.17b245c34e3bb58f", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-37f6c6cc7b", UID:"3b88b5e37fc5df321a745b9e80ad9960", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-37f6c6cc7b"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 38, 33, 860314511, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 38, 33, 860314511, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.32:54082->10.200.20.33:2379: read: connection timed out' (will not retry!)
Feb 9 18:38:51.720727 kubelet[2503]: E0209 18:38:51.720656 2503 controller.go:189] failed to update lease, error: Put "https://10.200.20.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-37f6c6cc7b?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)