Feb 9 18:31:57.117870 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Feb 9 18:31:57.117890 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 17:24:35 -00 2024 Feb 9 18:31:57.117897 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Feb 9 18:31:57.117904 kernel: printk: bootconsole [pl11] enabled Feb 9 18:31:57.117909 kernel: efi: EFI v2.70 by EDK II Feb 9 18:31:57.117915 kernel: efi: ACPI 2.0=0x3fd89018 SMBIOS=0x3fd66000 SMBIOS 3.0=0x3fd64000 MEMATTR=0x3ef2e698 RNG=0x3fd89998 MEMRESERVE=0x37e73f98 Feb 9 18:31:57.117923 kernel: random: crng init done Feb 9 18:31:57.117929 kernel: ACPI: Early table checksum verification disabled Feb 9 18:31:57.117934 kernel: ACPI: RSDP 0x000000003FD89018 000024 (v02 VRTUAL) Feb 9 18:31:57.117939 kernel: ACPI: XSDT 0x000000003FD89F18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117945 kernel: ACPI: FACP 0x000000003FD89C18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117952 kernel: ACPI: DSDT 0x000000003EBD2018 01DEC0 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 9 18:31:57.117957 kernel: ACPI: DBG2 0x000000003FD89B18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117962 kernel: ACPI: GTDT 0x000000003FD89D98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117969 kernel: ACPI: OEM0 0x000000003FD89098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117975 kernel: ACPI: SPCR 0x000000003FD89A98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117980 kernel: ACPI: APIC 0x000000003FD89818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117987 kernel: ACPI: SRAT 0x000000003FD89198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.117995 kernel: ACPI: PPTT 0x000000003FD89418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Feb 9 18:31:57.118001 kernel: ACPI: BGRT 0x000000003FD89E98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 18:31:57.118006 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Feb 9 18:31:57.118012 kernel: NUMA: Failed to initialise from firmware Feb 9 18:31:57.118018 kernel: NUMA: Faking a node at [mem 0x0000000000000000-0x00000001bfffffff] Feb 9 18:31:57.118023 kernel: NUMA: NODE_DATA [mem 0x1bf7f2900-0x1bf7f7fff] Feb 9 18:31:57.118029 kernel: Zone ranges: Feb 9 18:31:57.118034 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Feb 9 18:31:57.118040 kernel: DMA32 empty Feb 9 18:31:57.118046 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Feb 9 18:31:57.118052 kernel: Movable zone start for each node Feb 9 18:31:57.118058 kernel: Early memory node ranges Feb 9 18:31:57.118065 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Feb 9 18:31:57.118071 kernel: node 0: [mem 0x0000000000824000-0x000000003ec80fff] Feb 9 18:31:57.118077 kernel: node 0: [mem 0x000000003ec81000-0x000000003eca9fff] Feb 9 18:31:57.118082 kernel: node 0: [mem 0x000000003ecaa000-0x000000003fd29fff] Feb 9 18:31:57.118088 kernel: node 0: [mem 0x000000003fd2a000-0x000000003fd7dfff] Feb 9 18:31:57.118093 kernel: node 0: [mem 0x000000003fd7e000-0x000000003fd89fff] Feb 9 18:31:57.118099 kernel: node 0: [mem 0x000000003fd8a000-0x000000003fd8dfff] Feb 9 18:31:57.118104 kernel: node 0: [mem 0x000000003fd8e000-0x000000003fffffff] Feb 9 18:31:57.118110 kernel: node 0: [mem 
0x0000000100000000-0x00000001bfffffff] Feb 9 18:31:57.118117 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Feb 9 18:31:57.118125 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Feb 9 18:31:57.118131 kernel: psci: probing for conduit method from ACPI. Feb 9 18:31:57.118137 kernel: psci: PSCIv1.1 detected in firmware. Feb 9 18:31:57.118143 kernel: psci: Using standard PSCI v0.2 function IDs Feb 9 18:31:57.118152 kernel: psci: MIGRATE_INFO_TYPE not supported. Feb 9 18:31:57.118158 kernel: psci: SMC Calling Convention v1.4 Feb 9 18:31:57.118164 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node -1 Feb 9 18:31:57.118170 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node -1 Feb 9 18:31:57.118176 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Feb 9 18:31:57.118182 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Feb 9 18:31:57.118188 kernel: pcpu-alloc: [0] 0 [0] 1 Feb 9 18:31:57.118195 kernel: Detected PIPT I-cache on CPU0 Feb 9 18:31:57.118201 kernel: CPU features: detected: GIC system register CPU interface Feb 9 18:31:57.118206 kernel: CPU features: detected: Hardware dirty bit management Feb 9 18:31:57.118212 kernel: CPU features: detected: Spectre-BHB Feb 9 18:31:57.118220 kernel: CPU features: kernel page table isolation forced ON by KASLR Feb 9 18:31:57.118228 kernel: CPU features: detected: Kernel page table isolation (KPTI) Feb 9 18:31:57.118234 kernel: CPU features: detected: ARM erratum 1418040 Feb 9 18:31:57.118240 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Feb 9 18:31:57.118245 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Feb 9 18:31:57.118251 kernel: Policy zone: Normal Feb 9 18:31:57.118259 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:31:57.118265 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 18:31:57.118271 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 18:31:57.118277 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 9 18:31:57.118283 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 18:31:57.118290 kernel: software IO TLB: mapped [mem 0x000000003abd2000-0x000000003ebd2000] (64MB) Feb 9 18:31:57.118314 kernel: Memory: 3991936K/4194160K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 202224K reserved, 0K cma-reserved) Feb 9 18:31:57.118321 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 18:31:57.118327 kernel: trace event string verifier disabled Feb 9 18:31:57.118333 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 9 18:31:57.118340 kernel: rcu: RCU event tracing is enabled. Feb 9 18:31:57.118346 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 18:31:57.118352 kernel: Trampoline variant of Tasks RCU enabled. Feb 9 18:31:57.118358 kernel: Tracing variant of Tasks RCU enabled. Feb 9 18:31:57.118364 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
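The kernel command line logged above carries the Flatcar boot parameters (root=LABEL=ROOT, the dm-verity-backed /usr via mount.usr and verity.usrhash, the Azure OEM id, and bare flags such as flatcar.autologin). A minimal sketch, assuming a Linux host with /proc mounted, of splitting such a command line into bare flags and key=value pairs for inspection; the helper name parse_cmdline is illustrative and not part of the boot flow:

    # Minimal sketch: split a kernel command line (e.g. /proc/cmdline) into
    # bare flags and key=value parameters. Illustrative only.
    import shlex

    def parse_cmdline(cmdline: str):
        flags, params = [], {}
        for token in shlex.split(cmdline):   # handles quoted values
            if "=" in token:
                key, value = token.split("=", 1)
                params[key] = value
            else:
                flags.append(token)
        return flags, params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            flags, params = parse_cmdline(f.read().strip())
        print("root:", params.get("root"))                  # e.g. LABEL=ROOT
        print("usr verity hash:", params.get("verity.usrhash"))
        print("bare flags:", flags)                         # e.g. ['flatcar.autologin']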
Feb 9 18:31:57.118370 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 18:31:57.118381 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Feb 9 18:31:57.118387 kernel: GICv3: 960 SPIs implemented Feb 9 18:31:57.118393 kernel: GICv3: 0 Extended SPIs implemented Feb 9 18:31:57.118399 kernel: GICv3: Distributor has no Range Selector support Feb 9 18:31:57.118405 kernel: Root IRQ handler: gic_handle_irq Feb 9 18:31:57.118410 kernel: GICv3: 16 PPIs implemented Feb 9 18:31:57.118416 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Feb 9 18:31:57.118422 kernel: ITS: No ITS available, not enabling LPIs Feb 9 18:31:57.118428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:31:57.118434 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Feb 9 18:31:57.118440 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Feb 9 18:31:57.118447 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Feb 9 18:31:57.118456 kernel: Console: colour dummy device 80x25 Feb 9 18:31:57.118462 kernel: printk: console [tty1] enabled Feb 9 18:31:57.118468 kernel: ACPI: Core revision 20210730 Feb 9 18:31:57.118475 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Feb 9 18:31:57.118481 kernel: pid_max: default: 32768 minimum: 301 Feb 9 18:31:57.118487 kernel: LSM: Security Framework initializing Feb 9 18:31:57.118493 kernel: SELinux: Initializing. Feb 9 18:31:57.118499 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:31:57.118506 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 9 18:31:57.118513 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Feb 9 18:31:57.118519 kernel: Hyper-V: Host Build 10.0.22477.1369-1-0 Feb 9 18:31:57.118528 kernel: rcu: Hierarchical SRCU implementation. Feb 9 18:31:57.118534 kernel: Remapping and enabling EFI services. Feb 9 18:31:57.118540 kernel: smp: Bringing up secondary CPUs ... Feb 9 18:31:57.118546 kernel: Detected PIPT I-cache on CPU1 Feb 9 18:31:57.118552 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Feb 9 18:31:57.118559 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Feb 9 18:31:57.118565 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Feb 9 18:31:57.118572 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 18:31:57.118578 kernel: SMP: Total of 2 processors activated. 
Feb 9 18:31:57.118585 kernel: CPU features: detected: 32-bit EL0 Support Feb 9 18:31:57.118593 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Feb 9 18:31:57.118600 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Feb 9 18:31:57.118606 kernel: CPU features: detected: CRC32 instructions Feb 9 18:31:57.118612 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Feb 9 18:31:57.118618 kernel: CPU features: detected: LSE atomic instructions Feb 9 18:31:57.118624 kernel: CPU features: detected: Privileged Access Never Feb 9 18:31:57.118632 kernel: CPU: All CPU(s) started at EL1 Feb 9 18:31:57.118638 kernel: alternatives: patching kernel code Feb 9 18:31:57.118648 kernel: devtmpfs: initialized Feb 9 18:31:57.118656 kernel: KASLR enabled Feb 9 18:31:57.118663 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 18:31:57.118670 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 18:31:57.118678 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 18:31:57.118685 kernel: SMBIOS 3.1.0 present. Feb 9 18:31:57.118691 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 9 18:31:57.118698 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 18:31:57.118706 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Feb 9 18:31:57.118713 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Feb 9 18:31:57.118719 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Feb 9 18:31:57.118726 kernel: audit: initializing netlink subsys (disabled) Feb 9 18:31:57.118732 kernel: audit: type=2000 audit(0.089:1): state=initialized audit_enabled=0 res=1 Feb 9 18:31:57.118739 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 18:31:57.118747 kernel: cpuidle: using governor menu Feb 9 18:31:57.118754 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Feb 9 18:31:57.118761 kernel: ASID allocator initialised with 32768 entries Feb 9 18:31:57.118767 kernel: ACPI: bus type PCI registered Feb 9 18:31:57.118774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 18:31:57.118781 kernel: Serial: AMBA PL011 UART driver Feb 9 18:31:57.118787 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 18:31:57.118793 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Feb 9 18:31:57.118800 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 18:31:57.118806 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Feb 9 18:31:57.118816 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 18:31:57.118822 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Feb 9 18:31:57.118828 kernel: ACPI: Added _OSI(Module Device) Feb 9 18:31:57.118835 kernel: ACPI: Added _OSI(Processor Device) Feb 9 18:31:57.118841 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 18:31:57.118848 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 18:31:57.118854 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 18:31:57.118861 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 18:31:57.118867 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 18:31:57.118875 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 18:31:57.118881 kernel: ACPI: Interpreter enabled Feb 9 18:31:57.118889 kernel: ACPI: Using GIC for interrupt routing Feb 9 18:31:57.118896 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Feb 9 18:31:57.118902 kernel: printk: console [ttyAMA0] enabled Feb 9 18:31:57.118909 kernel: printk: bootconsole [pl11] disabled Feb 9 18:31:57.118915 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Feb 9 18:31:57.118922 kernel: iommu: Default domain type: Translated Feb 9 18:31:57.118928 kernel: iommu: DMA domain TLB invalidation policy: strict mode Feb 9 18:31:57.118936 kernel: vgaarb: loaded Feb 9 18:31:57.118942 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 18:31:57.118954 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 18:31:57.118963 kernel: PTP clock support registered Feb 9 18:31:57.118971 kernel: Registered efivars operations Feb 9 18:31:57.118977 kernel: No ACPI PMU IRQ for CPU0 Feb 9 18:31:57.118983 kernel: No ACPI PMU IRQ for CPU1 Feb 9 18:31:57.118990 kernel: clocksource: Switched to clocksource arch_sys_counter Feb 9 18:31:57.118996 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 18:31:57.119004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 18:31:57.119011 kernel: pnp: PnP ACPI init Feb 9 18:31:57.119017 kernel: pnp: PnP ACPI: found 0 devices Feb 9 18:31:57.119027 kernel: NET: Registered PF_INET protocol family Feb 9 18:31:57.119035 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 18:31:57.119042 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 9 18:31:57.119049 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 18:31:57.119056 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 9 18:31:57.119063 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Feb 9 18:31:57.119071 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 9 18:31:57.119077 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:31:57.119084 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 9 18:31:57.119093 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 18:31:57.119100 kernel: PCI: CLS 0 bytes, default 64 Feb 9 18:31:57.119106 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Feb 9 18:31:57.119114 kernel: kvm [1]: HYP mode not available Feb 9 18:31:57.119120 kernel: Initialise system trusted keyrings Feb 9 18:31:57.119127 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 9 18:31:57.119135 kernel: Key type asymmetric registered Feb 9 18:31:57.119141 kernel: Asymmetric key parser 'x509' registered Feb 9 18:31:57.119151 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 18:31:57.119157 kernel: io scheduler mq-deadline registered Feb 9 18:31:57.119164 kernel: io scheduler kyber registered Feb 9 18:31:57.119170 kernel: io scheduler bfq registered Feb 9 18:31:57.119177 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 18:31:57.119183 kernel: thunder_xcv, ver 1.0 Feb 9 18:31:57.119190 kernel: thunder_bgx, ver 1.0 Feb 9 18:31:57.119199 kernel: nicpf, ver 1.0 Feb 9 18:31:57.119208 kernel: nicvf, ver 1.0 Feb 9 18:31:57.119349 kernel: rtc-efi rtc-efi.0: registered as rtc0 Feb 9 18:31:57.119422 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T18:31:56 UTC (1707503516) Feb 9 18:31:57.119431 kernel: efifb: probing for efifb Feb 9 18:31:57.119438 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 9 18:31:57.119448 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 9 18:31:57.119455 kernel: efifb: scrolling: redraw Feb 9 18:31:57.119464 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 18:31:57.119470 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 18:31:57.119477 kernel: fb0: EFI VGA frame buffer device Feb 9 18:31:57.119483 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... 
Feb 9 18:31:57.119491 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 18:31:57.119498 kernel: NET: Registered PF_INET6 protocol family Feb 9 18:31:57.119508 kernel: Segment Routing with IPv6 Feb 9 18:31:57.119514 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 18:31:57.119521 kernel: NET: Registered PF_PACKET protocol family Feb 9 18:31:57.119529 kernel: Key type dns_resolver registered Feb 9 18:31:57.119535 kernel: registered taskstats version 1 Feb 9 18:31:57.119541 kernel: Loading compiled-in X.509 certificates Feb 9 18:31:57.119548 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 947a80114e81e2815f6db72a0d388260762488f9' Feb 9 18:31:57.119555 kernel: Key type .fscrypt registered Feb 9 18:31:57.119561 kernel: Key type fscrypt-provisioning registered Feb 9 18:31:57.119569 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 9 18:31:57.119579 kernel: ima: Allocated hash algorithm: sha1 Feb 9 18:31:57.119585 kernel: ima: No architecture policies found Feb 9 18:31:57.119593 kernel: Freeing unused kernel memory: 34688K Feb 9 18:31:57.119600 kernel: Run /init as init process Feb 9 18:31:57.119606 kernel: with arguments: Feb 9 18:31:57.119612 kernel: /init Feb 9 18:31:57.119619 kernel: with environment: Feb 9 18:31:57.119625 kernel: HOME=/ Feb 9 18:31:57.119631 kernel: TERM=linux Feb 9 18:31:57.119640 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 18:31:57.119648 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:31:57.119658 systemd[1]: Detected virtualization microsoft. Feb 9 18:31:57.119665 systemd[1]: Detected architecture arm64. Feb 9 18:31:57.119672 systemd[1]: Running in initrd. Feb 9 18:31:57.119682 systemd[1]: No hostname configured, using default hostname. Feb 9 18:31:57.119689 systemd[1]: Hostname set to . Feb 9 18:31:57.119696 systemd[1]: Initializing machine ID from random generator. Feb 9 18:31:57.119703 systemd[1]: Queued start job for default target initrd.target. Feb 9 18:31:57.119714 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:31:57.119721 systemd[1]: Reached target cryptsetup.target. Feb 9 18:31:57.119727 systemd[1]: Reached target paths.target. Feb 9 18:31:57.119734 systemd[1]: Reached target slices.target. Feb 9 18:31:57.119741 systemd[1]: Reached target swap.target. Feb 9 18:31:57.119751 systemd[1]: Reached target timers.target. Feb 9 18:31:57.119758 systemd[1]: Listening on iscsid.socket. Feb 9 18:31:57.119765 systemd[1]: Listening on iscsiuio.socket. Feb 9 18:31:57.119774 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:31:57.119781 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:31:57.119789 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:31:57.119796 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:31:57.119803 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:31:57.119813 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:31:57.119820 systemd[1]: Reached target sockets.target. Feb 9 18:31:57.119827 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:31:57.119834 systemd[1]: Finished network-cleanup.service. Feb 9 18:31:57.119843 systemd[1]: Starting systemd-fsck-usr.service... 
Feb 9 18:31:57.119850 systemd[1]: Starting systemd-journald.service... Feb 9 18:31:57.119858 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:31:57.119868 systemd[1]: Starting systemd-resolved.service... Feb 9 18:31:57.119876 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 18:31:57.119886 systemd-journald[276]: Journal started Feb 9 18:31:57.119925 systemd-journald[276]: Runtime Journal (/run/log/journal/9abf4ba60960458bba39daa249e56b61) is 8.0M, max 78.6M, 70.6M free. Feb 9 18:31:57.105282 systemd-modules-load[277]: Inserted module 'overlay' Feb 9 18:31:57.220965 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 18:31:57.220992 kernel: Bridge firewalling registered Feb 9 18:31:57.221018 kernel: SCSI subsystem initialized Feb 9 18:31:57.221027 systemd[1]: Started systemd-journald.service. Feb 9 18:31:57.221039 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 18:31:57.221049 kernel: device-mapper: uevent: version 1.0.3 Feb 9 18:31:57.221057 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 18:31:57.221065 kernel: audit: type=1130 audit(1707503517.201:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.140461 systemd-modules-load[277]: Inserted module 'br_netfilter' Feb 9 18:31:57.143814 systemd-resolved[278]: Positive Trust Anchors: Feb 9 18:31:57.260691 kernel: audit: type=1130 audit(1707503517.236:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.143822 systemd-resolved[278]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:31:57.143849 systemd-resolved[278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:31:57.338119 kernel: audit: type=1130 audit(1707503517.277:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.146013 systemd-resolved[278]: Defaulting to hostname 'linux'. 
Feb 9 18:31:57.368661 kernel: audit: type=1130 audit(1707503517.343:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.343000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.201800 systemd-modules-load[277]: Inserted module 'dm_multipath' Feb 9 18:31:57.223370 systemd[1]: Started systemd-resolved.service. Feb 9 18:31:57.261370 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:31:57.418679 kernel: audit: type=1130 audit(1707503517.380:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.277762 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 18:31:57.446979 kernel: audit: type=1130 audit(1707503517.413:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.369678 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:31:57.381438 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 18:31:57.413589 systemd[1]: Reached target nss-lookup.target. Feb 9 18:31:57.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.447243 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 18:31:57.453592 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:31:57.552107 kernel: audit: type=1130 audit(1707503517.492:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.552128 kernel: audit: type=1130 audit(1707503517.517:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.463331 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:31:57.583379 kernel: audit: type=1130 audit(1707503517.547:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:31:57.478624 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:31:57.493523 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 18:31:57.594656 dracut-cmdline[299]: dracut-dracut-053 Feb 9 18:31:57.594656 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=680ffc8c0dfb23738bd19ec96ea37b5bbadfb5cebf23767d1d52c89a6d5c00b4 Feb 9 18:31:57.518457 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:31:57.548378 systemd[1]: Starting dracut-cmdline.service... Feb 9 18:31:57.659314 kernel: Loading iSCSI transport class v2.0-870. Feb 9 18:31:57.671313 kernel: iscsi: registered transport (tcp) Feb 9 18:31:57.691520 kernel: iscsi: registered transport (qla4xxx) Feb 9 18:31:57.691576 kernel: QLogic iSCSI HBA Driver Feb 9 18:31:57.721774 systemd[1]: Finished dracut-cmdline.service. Feb 9 18:31:57.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:57.727094 systemd[1]: Starting dracut-pre-udev.service... Feb 9 18:31:57.780319 kernel: raid6: neonx8 gen() 13828 MB/s Feb 9 18:31:57.802309 kernel: raid6: neonx8 xor() 10827 MB/s Feb 9 18:31:57.823310 kernel: raid6: neonx4 gen() 13541 MB/s Feb 9 18:31:57.846313 kernel: raid6: neonx4 xor() 11135 MB/s Feb 9 18:31:57.867308 kernel: raid6: neonx2 gen() 12997 MB/s Feb 9 18:31:57.889308 kernel: raid6: neonx2 xor() 10246 MB/s Feb 9 18:31:57.910308 kernel: raid6: neonx1 gen() 10500 MB/s Feb 9 18:31:57.931307 kernel: raid6: neonx1 xor() 8789 MB/s Feb 9 18:31:57.953309 kernel: raid6: int64x8 gen() 6294 MB/s Feb 9 18:31:57.974309 kernel: raid6: int64x8 xor() 3549 MB/s Feb 9 18:31:57.995308 kernel: raid6: int64x4 gen() 7265 MB/s Feb 9 18:31:58.018309 kernel: raid6: int64x4 xor() 3849 MB/s Feb 9 18:31:58.039309 kernel: raid6: int64x2 gen() 6152 MB/s Feb 9 18:31:58.060307 kernel: raid6: int64x2 xor() 3323 MB/s Feb 9 18:31:58.082315 kernel: raid6: int64x1 gen() 5047 MB/s Feb 9 18:31:58.107994 kernel: raid6: int64x1 xor() 2646 MB/s Feb 9 18:31:58.108004 kernel: raid6: using algorithm neonx8 gen() 13828 MB/s Feb 9 18:31:58.108012 kernel: raid6: .... xor() 10827 MB/s, rmw enabled Feb 9 18:31:58.114444 kernel: raid6: using neon recovery algorithm Feb 9 18:31:58.138523 kernel: xor: measuring software checksum speed Feb 9 18:31:58.138535 kernel: 8regs : 17300 MB/sec Feb 9 18:31:58.143356 kernel: 32regs : 20755 MB/sec Feb 9 18:31:58.147813 kernel: arm64_neon : 27911 MB/sec Feb 9 18:31:58.147822 kernel: xor: using function: arm64_neon (27911 MB/sec) Feb 9 18:31:58.210314 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Feb 9 18:31:58.219161 systemd[1]: Finished dracut-pre-udev.service. Feb 9 18:31:58.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:58.228000 audit: BPF prog-id=7 op=LOAD Feb 9 18:31:58.228000 audit: BPF prog-id=8 op=LOAD Feb 9 18:31:58.229674 systemd[1]: Starting systemd-udevd.service... 
Feb 9 18:31:58.250026 systemd-udevd[476]: Using default interface naming scheme 'v252'. Feb 9 18:31:58.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:58.257675 systemd[1]: Started systemd-udevd.service. Feb 9 18:31:58.270381 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 18:31:58.281934 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation Feb 9 18:31:58.309109 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 18:31:58.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:58.315900 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:31:58.365211 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:31:58.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:31:58.428405 kernel: hv_vmbus: Vmbus version:5.3 Feb 9 18:31:58.437324 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 18:31:58.459542 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Feb 9 18:31:58.459609 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 18:31:58.459755 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 18:31:58.480405 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Feb 9 18:31:58.481323 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 18:31:58.491545 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 18:31:58.500423 kernel: scsi host0: storvsc_host_t Feb 9 18:31:58.500595 kernel: scsi host1: storvsc_host_t Feb 9 18:31:58.500618 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 18:31:58.518692 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 18:31:58.538553 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 18:31:58.538773 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 18:31:58.551668 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 18:31:58.551870 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 18:31:58.556495 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 18:31:58.564749 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 18:31:58.564939 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 18:31:58.565038 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 18:31:58.573331 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 18:31:58.585331 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 18:31:58.585501 kernel: hv_netvsc 00224878-4224-0022-4878-422400224878 eth0: VF slot 1 added Feb 9 18:31:58.598335 kernel: hv_vmbus: registering driver hv_pci Feb 9 18:31:58.610178 kernel: hv_pci f4068fd2-71d0-4749-8910-9792c818c342: PCI VMBus probing: Using version 0x10004 Feb 9 18:31:58.629459 kernel: hv_pci f4068fd2-71d0-4749-8910-9792c818c342: PCI host bridge to bus 71d0:00 Feb 9 18:31:58.629644 kernel: pci_bus 71d0:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Feb 9 18:31:58.629741 kernel: pci_bus 
71d0:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 18:31:58.648387 kernel: pci 71d0:00:02.0: [15b3:1018] type 00 class 0x020000 Feb 9 18:31:58.662501 kernel: pci 71d0:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 18:31:58.686391 kernel: pci 71d0:00:02.0: enabling Extended Tags Feb 9 18:31:58.714291 kernel: pci 71d0:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 71d0:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Feb 9 18:31:58.714494 kernel: pci_bus 71d0:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 18:31:58.721950 kernel: pci 71d0:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Feb 9 18:31:58.764330 kernel: mlx5_core 71d0:00:02.0: firmware version: 16.30.1284 Feb 9 18:31:58.922322 kernel: mlx5_core 71d0:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0) Feb 9 18:31:58.982068 kernel: hv_netvsc 00224878-4224-0022-4878-422400224878 eth0: VF registering: eth1 Feb 9 18:31:58.982243 kernel: mlx5_core 71d0:00:02.0 eth1: joined to eth0 Feb 9 18:31:58.996815 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 18:31:59.016109 kernel: mlx5_core 71d0:00:02.0 enP29136s1: renamed from eth1 Feb 9 18:31:59.030745 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (537) Feb 9 18:31:59.044586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 18:31:59.216789 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 18:31:59.224454 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 18:31:59.241985 systemd[1]: Starting disk-uuid.service... Feb 9 18:31:59.265882 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 18:32:00.280802 disk-uuid[601]: The operation has completed successfully. Feb 9 18:32:00.288783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 18:32:00.349848 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 18:32:00.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.349948 systemd[1]: Finished disk-uuid.service. Feb 9 18:32:00.361526 systemd[1]: Starting verity-setup.service... Feb 9 18:32:00.408323 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Feb 9 18:32:00.651665 systemd[1]: Found device dev-mapper-usr.device. Feb 9 18:32:00.658127 systemd[1]: Mounting sysusr-usr.mount... Feb 9 18:32:00.669373 systemd[1]: Finished verity-setup.service. Feb 9 18:32:00.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.726975 systemd[1]: Mounted sysusr-usr.mount. Feb 9 18:32:00.734950 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 18:32:00.731448 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 18:32:00.732181 systemd[1]: Starting ignition-setup.service... Feb 9 18:32:00.739951 systemd[1]: Starting parse-ip-for-networkd.service... 
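The device units found above (dev-disk-by\x2dlabel-ROOT, -OEM, by-partlabel USR-A, by-partuuid 7130c94a-...) correspond to udev-created symlinks under /dev/disk. A minimal sketch, assuming a Linux host where udev has populated those directories, that lists which block device each symlink resolves to; list_links is an illustrative helper:

    # Minimal sketch: show which block device each /dev/disk/by-* symlink
    # points to. Assumes udev has populated /dev/disk on a Linux host.
    import os

    def list_links(directory: str):
        if not os.path.isdir(directory):
            return
        for name in sorted(os.listdir(directory)):
            link = os.path.join(directory, name)
            print(f"{link} -> {os.path.realpath(link)}")

    for subdir in ("by-label", "by-partlabel", "by-partuuid"):
        list_links(os.path.join("/dev/disk", subdir))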
Feb 9 18:32:00.781204 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:32:00.781280 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:32:00.786554 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:32:00.830336 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 18:32:00.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.839000 audit: BPF prog-id=9 op=LOAD Feb 9 18:32:00.841350 systemd[1]: Starting systemd-networkd.service... Feb 9 18:32:00.868391 systemd-networkd[844]: lo: Link UP Feb 9 18:32:00.868400 systemd-networkd[844]: lo: Gained carrier Feb 9 18:32:00.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.869114 systemd-networkd[844]: Enumeration completed Feb 9 18:32:00.872106 systemd[1]: Started systemd-networkd.service. Feb 9 18:32:00.877032 systemd[1]: Reached target network.target. Feb 9 18:32:00.881170 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:32:00.894635 systemd[1]: Starting iscsiuio.service... Feb 9 18:32:00.903871 systemd[1]: Started iscsiuio.service. Feb 9 18:32:00.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.922589 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 18:32:00.926093 systemd[1]: Starting iscsid.service... Feb 9 18:32:00.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.934576 systemd[1]: Started iscsid.service. Feb 9 18:32:00.947389 iscsid[854]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:32:00.947389 iscsid[854]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 18:32:00.947389 iscsid[854]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 9 18:32:00.947389 iscsid[854]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 18:32:00.947389 iscsid[854]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 18:32:00.947389 iscsid[854]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 18:32:00.947389 iscsid[854]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 18:32:01.085592 kernel: kauditd_printk_skb: 15 callbacks suppressed Feb 9 18:32:01.085621 kernel: audit: type=1130 audit(1707503521.012:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:01.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:00.943126 systemd[1]: Starting dracut-initqueue.service... Feb 9 18:32:00.997634 systemd[1]: Finished dracut-initqueue.service. Feb 9 18:32:01.012607 systemd[1]: Reached target remote-fs-pre.target. Feb 9 18:32:01.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.047405 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:32:01.128761 kernel: audit: type=1130 audit(1707503521.101:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.060319 systemd[1]: Reached target remote-fs.target. Feb 9 18:32:01.074189 systemd[1]: Starting dracut-pre-mount.service... Feb 9 18:32:01.092590 systemd[1]: Finished dracut-pre-mount.service. Feb 9 18:32:01.149315 kernel: mlx5_core 71d0:00:02.0 enP29136s1: Link up Feb 9 18:32:01.191315 kernel: hv_netvsc 00224878-4224-0022-4878-422400224878 eth0: Data path switched to VF: enP29136s1 Feb 9 18:32:01.197911 systemd-networkd[844]: enP29136s1: Link UP Feb 9 18:32:01.202665 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:32:01.197993 systemd-networkd[844]: eth0: Link UP Feb 9 18:32:01.198108 systemd-networkd[844]: eth0: Gained carrier Feb 9 18:32:01.207701 systemd-networkd[844]: enP29136s1: Gained carrier Feb 9 18:32:01.220363 systemd-networkd[844]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:32:01.239252 systemd[1]: Finished ignition-setup.service. Feb 9 18:32:01.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.271865 kernel: audit: type=1130 audit(1707503521.244:28): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:01.267603 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 18:32:02.559442 systemd-networkd[844]: eth0: Gained IPv6LL Feb 9 18:32:04.201397 ignition[869]: Ignition 2.14.0 Feb 9 18:32:04.201408 ignition[869]: Stage: fetch-offline Feb 9 18:32:04.201462 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:04.201485 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:04.327014 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:04.327163 ignition[869]: parsed url from cmdline: "" Feb 9 18:32:04.327167 ignition[869]: no config URL provided Feb 9 18:32:04.327172 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:32:04.387775 kernel: audit: type=1130 audit(1707503524.356:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:04.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.346974 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 18:32:04.327181 ignition[869]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:32:04.358061 systemd[1]: Starting ignition-fetch.service... Feb 9 18:32:04.327186 ignition[869]: failed to fetch config: resource requires networking Feb 9 18:32:04.327517 ignition[869]: Ignition finished successfully Feb 9 18:32:04.394932 ignition[875]: Ignition 2.14.0 Feb 9 18:32:04.394940 ignition[875]: Stage: fetch Feb 9 18:32:04.395049 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:04.395069 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:04.402578 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:04.404908 ignition[875]: parsed url from cmdline: "" Feb 9 18:32:04.404913 ignition[875]: no config URL provided Feb 9 18:32:04.404921 ignition[875]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 18:32:04.404938 ignition[875]: no config at "/usr/lib/ignition/user.ign" Feb 9 18:32:04.404982 ignition[875]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 18:32:04.488746 ignition[875]: GET result: OK Feb 9 18:32:04.488861 ignition[875]: config has been read from IMDS userdata Feb 9 18:32:04.488928 ignition[875]: parsing config with SHA512: 6c4b7c42c1e2728457dcf07d4658c847f20ddef88b53c806eee0aafb7e301d59b2e1dfb6166852ef47d773050b4d45d6438e28f699fbf83dc8ee4fb9514f1d05 Feb 9 18:32:04.522048 unknown[875]: fetched base config from "system" Feb 9 18:32:04.522058 unknown[875]: fetched base config from "system" Feb 9 18:32:04.522708 ignition[875]: fetch: fetch complete Feb 9 18:32:04.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.522064 unknown[875]: fetched user config from "azure" Feb 9 18:32:04.562620 kernel: audit: type=1130 audit(1707503524.536:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.522713 ignition[875]: fetch: fetch passed Feb 9 18:32:04.531715 systemd[1]: Finished ignition-fetch.service. Feb 9 18:32:04.522753 ignition[875]: Ignition finished successfully Feb 9 18:32:04.537660 systemd[1]: Starting ignition-kargs.service... Feb 9 18:32:04.571453 ignition[881]: Ignition 2.14.0 Feb 9 18:32:04.571460 ignition[881]: Stage: kargs Feb 9 18:32:04.571595 ignition[881]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:04.571639 ignition[881]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:04.600004 ignition[881]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:04.601329 ignition[881]: kargs: kargs passed Feb 9 18:32:04.607994 systemd[1]: Finished ignition-kargs.service. 
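The Ignition fetch stage above reads the instance userdata from the Azure IMDS endpoint it logs (169.254.169.254). A minimal sketch of the same request, assuming it runs inside an Azure VM (IMDS is only reachable from the instance and requires the "Metadata: true" header); the userData payload is typically base64-encoded, so the sketch decodes it before printing. The constant name URL is illustrative:

    # Minimal sketch: fetch instance userdata from Azure IMDS, mirroring the
    # GET the Ignition fetch stage logs above. Only works from inside an
    # Azure VM; the "Metadata: true" header is required and the payload is
    # base64-encoded.
    import base64
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        encoded = resp.read()

    print(base64.b64decode(encoded).decode("utf-8", errors="replace"))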
Feb 9 18:32:04.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.601378 ignition[881]: Ignition finished successfully Feb 9 18:32:04.650996 ignition[887]: Ignition 2.14.0 Feb 9 18:32:04.657201 kernel: audit: type=1130 audit(1707503524.616:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.639693 systemd[1]: Starting ignition-disks.service... Feb 9 18:32:04.687421 kernel: audit: type=1130 audit(1707503524.666:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:04.651003 ignition[887]: Stage: disks Feb 9 18:32:04.661957 systemd[1]: Finished ignition-disks.service. Feb 9 18:32:04.651132 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:04.667109 systemd[1]: Reached target initrd-root-device.target. Feb 9 18:32:04.651154 ignition[887]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:04.692908 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:32:04.655918 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:04.700702 systemd[1]: Reached target local-fs.target. Feb 9 18:32:04.659943 ignition[887]: disks: disks passed Feb 9 18:32:04.709821 systemd[1]: Reached target sysinit.target. Feb 9 18:32:04.660004 ignition[887]: Ignition finished successfully Feb 9 18:32:04.720957 systemd[1]: Reached target basic.target. Feb 9 18:32:04.730991 systemd[1]: Starting systemd-fsck-root.service... Feb 9 18:32:05.143191 systemd-fsck[895]: ROOT: clean, 602/7326000 files, 481069/7359488 blocks Feb 9 18:32:05.152569 systemd[1]: Finished systemd-fsck-root.service. Feb 9 18:32:05.184406 kernel: audit: type=1130 audit(1707503525.157:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:05.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:05.158207 systemd[1]: Mounting sysroot.mount... Feb 9 18:32:05.199322 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 18:32:05.200393 systemd[1]: Mounted sysroot.mount. Feb 9 18:32:05.204493 systemd[1]: Reached target initrd-root-fs.target. Feb 9 18:32:05.257805 systemd[1]: Mounting sysroot-usr.mount... Feb 9 18:32:05.262653 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 18:32:05.270560 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 18:32:05.270601 systemd[1]: Reached target ignition-diskful.target. Feb 9 18:32:05.277320 systemd[1]: Mounted sysroot-usr.mount. 
Feb 9 18:32:05.329945 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:32:05.335509 systemd[1]: Starting initrd-setup-root.service... Feb 9 18:32:05.360942 initrd-setup-root[911]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 18:32:05.376563 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (906) Feb 9 18:32:05.376593 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:32:05.381714 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:32:05.386777 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:32:05.391108 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:32:05.402955 initrd-setup-root[937]: cut: /sysroot/etc/group: No such file or directory Feb 9 18:32:05.412687 initrd-setup-root[945]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 18:32:05.435136 initrd-setup-root[953]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 18:32:05.923517 systemd[1]: Finished initrd-setup-root.service. Feb 9 18:32:05.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:05.951191 systemd[1]: Starting ignition-mount.service... Feb 9 18:32:05.962443 kernel: audit: type=1130 audit(1707503525.928:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:05.962345 systemd[1]: Starting sysroot-boot.service... Feb 9 18:32:05.969843 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 18:32:05.969992 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 18:32:06.003542 systemd[1]: Finished sysroot-boot.service. Feb 9 18:32:06.033431 kernel: audit: type=1130 audit(1707503526.008:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.033501 ignition[974]: INFO : Ignition 2.14.0 Feb 9 18:32:06.033501 ignition[974]: INFO : Stage: mount Feb 9 18:32:06.033501 ignition[974]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:06.033501 ignition[974]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:06.033501 ignition[974]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:06.033501 ignition[974]: INFO : mount: mount passed Feb 9 18:32:06.033501 ignition[974]: INFO : Ignition finished successfully Feb 9 18:32:06.101974 kernel: audit: type=1130 audit(1707503526.035:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.022449 systemd[1]: Finished ignition-mount.service. 
Feb 9 18:32:06.445459 coreos-metadata[905]: Feb 09 18:32:06.445 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 18:32:06.454809 coreos-metadata[905]: Feb 09 18:32:06.454 INFO Fetch successful Feb 9 18:32:06.483183 coreos-metadata[905]: Feb 09 18:32:06.483 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 18:32:06.511060 coreos-metadata[905]: Feb 09 18:32:06.511 INFO Fetch successful Feb 9 18:32:06.517171 coreos-metadata[905]: Feb 09 18:32:06.516 INFO wrote hostname ci-3510.3.2-a-e8e52debc2 to /sysroot/etc/hostname Feb 9 18:32:06.526407 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 18:32:06.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.533626 systemd[1]: Starting ignition-files.service... Feb 9 18:32:06.569852 kernel: audit: type=1130 audit(1707503526.532:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:06.567567 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 18:32:06.593324 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (984) Feb 9 18:32:06.606419 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Feb 9 18:32:06.606436 kernel: BTRFS info (device sda6): using free space tree Feb 9 18:32:06.606445 kernel: BTRFS info (device sda6): has skinny extents Feb 9 18:32:06.615892 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 18:32:06.634518 ignition[1003]: INFO : Ignition 2.14.0 Feb 9 18:32:06.640327 ignition[1003]: INFO : Stage: files Feb 9 18:32:06.640327 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:06.640327 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:06.671564 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:06.679401 ignition[1003]: DEBUG : files: compiled without relabeling support, skipping Feb 9 18:32:06.686860 ignition[1003]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 18:32:06.686860 ignition[1003]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 18:32:06.778123 ignition[1003]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 18:32:06.787051 ignition[1003]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 18:32:06.800924 unknown[1003]: wrote ssh authorized keys file for user: core Feb 9 18:32:06.806992 ignition[1003]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 18:32:06.806992 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:32:06.806992 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1 Feb 9 18:32:07.352043 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 
18:32:07.498545 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c Feb 9 18:32:07.515925 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz" Feb 9 18:32:07.515925 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:32:07.515925 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Feb 9 18:32:07.690177 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 18:32:08.054610 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Feb 9 18:32:08.067393 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:32:08.067393 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Feb 9 18:32:08.067393 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:32:08.067393 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Feb 9 18:32:08.405186 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 18:32:08.611285 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Feb 9 18:32:08.631389 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Feb 9 18:32:08.631389 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:32:08.631389 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1 Feb 9 18:32:08.780116 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 18:32:09.479392 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d Feb 9 18:32:09.496608 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 18:32:09.496608 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:32:09.496608 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1 Feb 9 18:32:09.542203 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 18:32:09.803618 ignition[1003]: 
DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db Feb 9 18:32:09.822473 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 9 18:32:09.822473 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:32:09.822473 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1 Feb 9 18:32:09.863149 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): GET result: OK Feb 9 18:32:10.163521 ignition[1003]: DEBUG : files: createFilesystemsFiles: createFiles: op(9): file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 18:32:10.181734 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:32:10.399511 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1003) Feb 9 18:32:10.399537 kernel: audit: type=1130 
audit(1707503530.306:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4233856298" Feb 9 18:32:10.399598 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4233856298": device or resource busy Feb 9 18:32:10.399598 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4233856298", trying btrfs: device or resource busy Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4233856298" Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4233856298" Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem4233856298" Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem4233856298" Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588972931" Feb 9 18:32:10.399598 ignition[1003]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588972931": device or resource busy Feb 9 18:32:10.399598 ignition[1003]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2588972931", trying btrfs: device or resource busy Feb 9 18:32:10.399598 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588972931" Feb 9 18:32:10.656861 kernel: audit: type=1130 audit(1707503530.457:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.656901 kernel: audit: type=1131 audit(1707503530.457:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:10.656911 kernel: audit: type=1130 audit(1707503530.613:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.253717 systemd[1]: mnt-oem4233856298.mount: Deactivated successfully. Feb 9 18:32:10.663066 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588972931" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2588972931" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2588972931" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(19): [started] processing unit "waagent.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(19): [finished] processing unit "waagent.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1a): [started] processing unit "containerd.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1a): op(1b): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1a): op(1b): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1a): [finished] processing unit "containerd.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Feb 9 18:32:10.663066 ignition[1003]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at 
"/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:32:10.996162 kernel: audit: type=1130 audit(1707503530.679:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.996193 kernel: audit: type=1131 audit(1707503530.708:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.996233 kernel: audit: type=1130 audit(1707503530.801:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.996245 kernel: audit: type=1131 audit(1707503530.894:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.292476 systemd[1]: Finished ignition-files.service. 
Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(20): [started] processing unit "prepare-helm.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(20): op(21): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(20): op(21): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(20): [finished] processing unit "prepare-helm.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(23): [started] setting preset to enabled for "waagent.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(23): [finished] setting preset to enabled for "waagent.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(24): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(25): [started] setting preset to enabled for "prepare-critools.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(26): [started] setting preset to enabled for "prepare-helm.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: op(26): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: createResultFile: createFiles: op(27): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: createResultFile: createFiles: op(27): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 18:32:11.007064 ignition[1003]: INFO : files: files passed Feb 9 18:32:11.007064 ignition[1003]: INFO : Ignition finished successfully Feb 9 18:32:11.445822 kernel: audit: type=1131 audit(1707503531.115:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445860 kernel: audit: type=1131 audit(1707503531.161:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445870 kernel: audit: type=1131 audit(1707503531.191:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445880 kernel: audit: type=1131 audit(1707503531.224:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:11.445890 kernel: audit: type=1131 audit(1707503531.255:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445905 kernel: audit: type=1131 audit(1707503531.307:51): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445918 kernel: audit: type=1131 audit(1707503531.362:52): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445927 kernel: audit: type=1131 audit(1707503531.393:53): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.445936 kernel: audit: type=1131 audit(1707503531.426:54): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.224000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.446163 initrd-setup-root-after-ignition[1028]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 18:32:11.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:10.310023 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 18:32:11.486571 kernel: audit: type=1130 audit(1707503531.452:55): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.486687 iscsid[854]: iscsid shutting down. Feb 9 18:32:11.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.339053 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 18:32:11.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.350250 systemd[1]: Starting ignition-quench.service... Feb 9 18:32:11.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.453347 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 18:32:11.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.453452 systemd[1]: Finished ignition-quench.service. Feb 9 18:32:10.607161 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 18:32:10.613755 systemd[1]: Reached target ignition-complete.target. Feb 9 18:32:11.567303 ignition[1041]: INFO : Ignition 2.14.0 Feb 9 18:32:11.567303 ignition[1041]: INFO : Stage: umount Feb 9 18:32:11.567303 ignition[1041]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 18:32:11.567303 ignition[1041]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 18:32:11.567303 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 18:32:11.567303 ignition[1041]: INFO : umount: umount passed Feb 9 18:32:11.567303 ignition[1041]: INFO : Ignition finished successfully Feb 9 18:32:11.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.641391 systemd[1]: Starting initrd-parse-etc.service... 
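[editor's note] The "setting preset to enabled" operations above record Ignition marking nvidia.service, waagent.service, and the prepare-* units for enablement in the target root. One common way to express that is a systemd preset file; the sketch below writes one, but the 20-ignition.preset file name and its location are assumptions, not something this log shows:

```python
from pathlib import Path

UNITS = [
    "nvidia.service",
    "waagent.service",
    "prepare-cni-plugins.service",
    "prepare-critools.service",
    "prepare-helm.service",
]

# Hypothetical preset path under the mounted target root.
preset = Path("/sysroot/etc/systemd/system-preset/20-ignition.preset")
preset.parent.mkdir(parents=True, exist_ok=True)
preset.write_text("".join(f"enable {u}\n" for u in UNITS))
```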
Feb 9 18:32:11.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.672552 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 18:32:10.672667 systemd[1]: Finished initrd-parse-etc.service. Feb 9 18:32:10.708717 systemd[1]: Reached target initrd-fs.target. Feb 9 18:32:11.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.724622 systemd[1]: Reached target initrd.target. Feb 9 18:32:10.761055 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 18:32:10.768076 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 18:32:11.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.793822 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 18:32:10.802542 systemd[1]: Starting initrd-cleanup.service... Feb 9 18:32:10.839417 systemd[1]: Stopped target nss-lookup.target. Feb 9 18:32:10.848824 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 18:32:11.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.867947 systemd[1]: Stopped target timers.target. Feb 9 18:32:11.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.880458 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 18:32:11.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.880525 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 18:32:10.926493 systemd[1]: Stopped target initrd.target. Feb 9 18:32:11.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:10.943945 systemd[1]: Stopped target basic.target. Feb 9 18:32:11.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.784000 audit: BPF prog-id=6 op=UNLOAD Feb 9 18:32:10.957024 systemd[1]: Stopped target ignition-complete.target. Feb 9 18:32:10.977596 systemd[1]: Stopped target ignition-diskful.target. Feb 9 18:32:11.002211 systemd[1]: Stopped target initrd-root-device.target. Feb 9 18:32:11.012816 systemd[1]: Stopped target remote-fs.target. Feb 9 18:32:11.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.029731 systemd[1]: Stopped target remote-fs-pre.target. 
Feb 9 18:32:11.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.042670 systemd[1]: Stopped target sysinit.target. Feb 9 18:32:11.849385 kernel: hv_netvsc 00224878-4224-0022-4878-422400224878 eth0: Data path switched from VF: enP29136s1 Feb 9 18:32:11.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.054679 systemd[1]: Stopped target local-fs.target. Feb 9 18:32:11.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.070494 systemd[1]: Stopped target local-fs-pre.target. Feb 9 18:32:11.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.090085 systemd[1]: Stopped target swap.target. Feb 9 18:32:11.102344 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 18:32:11.102407 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 18:32:11.116015 systemd[1]: Stopped target cryptsetup.target. Feb 9 18:32:11.148881 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 18:32:11.148945 systemd[1]: Stopped dracut-initqueue.service. Feb 9 18:32:11.161819 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 18:32:11.161862 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 18:32:11.192260 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 18:32:11.192319 systemd[1]: Stopped ignition-files.service. Feb 9 18:32:11.224880 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 18:32:11.224923 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 18:32:11.280176 systemd[1]: Stopping ignition-mount.service... Feb 9 18:32:11.290368 systemd[1]: Stopping iscsid.service... Feb 9 18:32:11.302575 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 18:32:11.964000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:11.302665 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 18:32:11.341404 systemd[1]: Stopping sysroot-boot.service... Feb 9 18:32:11.358077 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 18:32:11.358150 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 18:32:11.363294 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 18:32:11.363373 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 18:32:11.394510 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 18:32:11.394641 systemd[1]: Stopped iscsid.service. Feb 9 18:32:11.426870 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 18:32:11.426954 systemd[1]: Finished initrd-cleanup.service. 
Feb 9 18:32:11.454173 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 18:32:12.038000 audit: BPF prog-id=5 op=UNLOAD Feb 9 18:32:12.038000 audit: BPF prog-id=4 op=UNLOAD Feb 9 18:32:12.038000 audit: BPF prog-id=3 op=UNLOAD Feb 9 18:32:12.038000 audit: BPF prog-id=8 op=UNLOAD Feb 9 18:32:12.038000 audit: BPF prog-id=7 op=UNLOAD Feb 9 18:32:11.454639 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 18:32:11.480428 systemd[1]: Stopped ignition-mount.service. Feb 9 18:32:11.494908 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 18:32:11.494970 systemd[1]: Stopped ignition-disks.service. Feb 9 18:32:11.503707 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 18:32:11.503755 systemd[1]: Stopped ignition-kargs.service. Feb 9 18:32:11.508915 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 18:32:11.508954 systemd[1]: Stopped ignition-fetch.service. Feb 9 18:32:11.524548 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 18:32:11.524593 systemd[1]: Stopped ignition-fetch-offline.service. Feb 9 18:32:11.534174 systemd[1]: Stopped target paths.target. Feb 9 18:32:11.546476 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 18:32:11.556326 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 18:32:11.562524 systemd[1]: Stopped target slices.target. Feb 9 18:32:11.571808 systemd[1]: Stopped target sockets.target. Feb 9 18:32:11.580908 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 18:32:12.090326 systemd-journald[276]: Received SIGTERM from PID 1 (n/a). Feb 9 18:32:11.580958 systemd[1]: Closed iscsid.socket. Feb 9 18:32:11.588875 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 18:32:11.588916 systemd[1]: Stopped ignition-setup.service. Feb 9 18:32:11.601274 systemd[1]: Stopping iscsiuio.service... Feb 9 18:32:11.626216 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 18:32:11.626336 systemd[1]: Stopped iscsiuio.service. Feb 9 18:32:11.634620 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 18:32:11.634700 systemd[1]: Stopped sysroot-boot.service. Feb 9 18:32:11.643835 systemd[1]: Stopped target network.target. Feb 9 18:32:11.652617 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 18:32:11.652651 systemd[1]: Closed iscsiuio.socket. Feb 9 18:32:11.661571 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 18:32:11.661611 systemd[1]: Stopped initrd-setup-root.service. Feb 9 18:32:11.670873 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:32:11.681197 systemd[1]: Stopping systemd-resolved.service... Feb 9 18:32:11.685031 systemd-networkd[844]: eth0: DHCPv6 lease lost Feb 9 18:32:12.090000 audit: BPF prog-id=9 op=UNLOAD Feb 9 18:32:11.690229 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:32:11.690341 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:32:11.702104 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 18:32:11.702137 systemd[1]: Closed systemd-networkd.socket. Feb 9 18:32:11.715574 systemd[1]: Stopping network-cleanup.service... Feb 9 18:32:11.724610 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 18:32:11.724679 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 18:32:11.733645 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 18:32:11.733705 systemd[1]: Stopped systemd-sysctl.service. 
Feb 9 18:32:11.748267 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 18:32:11.748326 systemd[1]: Stopped systemd-modules-load.service. Feb 9 18:32:11.753254 systemd[1]: Stopping systemd-udevd.service... Feb 9 18:32:11.763981 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Feb 9 18:32:11.764652 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 18:32:11.764758 systemd[1]: Stopped systemd-resolved.service. Feb 9 18:32:11.772476 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 18:32:11.772608 systemd[1]: Stopped systemd-udevd.service. Feb 9 18:32:11.785183 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 18:32:11.785234 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 18:32:11.795398 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 18:32:11.795453 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 18:32:11.804541 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 18:32:11.804596 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 18:32:11.814554 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 18:32:11.814597 systemd[1]: Stopped dracut-cmdline.service. Feb 9 18:32:11.823853 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 18:32:11.823893 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 18:32:11.845532 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 18:32:11.855530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 18:32:11.855592 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 18:32:11.861654 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 18:32:11.861753 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 18:32:11.955923 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 18:32:11.956031 systemd[1]: Stopped network-cleanup.service. Feb 9 18:32:11.964712 systemd[1]: Reached target initrd-switch-root.target. Feb 9 18:32:11.975753 systemd[1]: Starting initrd-switch-root.service... Feb 9 18:32:12.036569 systemd[1]: Switching root. Feb 9 18:32:12.091699 systemd-journald[276]: Journal stopped Feb 9 18:32:24.670323 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 18:32:24.670344 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 18:32:24.670356 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 18:32:24.670366 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 18:32:24.670374 kernel: SELinux: policy capability open_perms=1 Feb 9 18:32:24.670382 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 18:32:24.670390 kernel: SELinux: policy capability always_check_network=0 Feb 9 18:32:24.670398 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 18:32:24.670406 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 18:32:24.670414 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 18:32:24.670423 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 18:32:24.670432 systemd[1]: Successfully loaded SELinux policy in 281.947ms. Feb 9 18:32:24.670442 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.882ms. 
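[editor's note] After the switch-root, the kernel reports which SELinux policy capabilities the freshly loaded policy enables (network_peer_controls=1, open_perms=1, and so on). The same flags can be read back from selinuxfs at runtime; a small sketch assuming /sys/fs/selinux is mounted on the running system:

```python
from pathlib import Path

CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities() -> dict:
    """Read each capability flag (0/1 files) exposed by the loaded policy."""
    caps = {}
    for entry in sorted(CAPS_DIR.iterdir()):
        caps[entry.name] = entry.read_text().strip() == "1"
    return caps

if __name__ == "__main__":
    for name, enabled in policy_capabilities().items():
        print(f"policy capability {name}={int(enabled)}")
```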
Feb 9 18:32:24.670452 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 18:32:24.670463 systemd[1]: Detected virtualization microsoft. Feb 9 18:32:24.670472 systemd[1]: Detected architecture arm64. Feb 9 18:32:24.670481 systemd[1]: Detected first boot. Feb 9 18:32:24.670491 systemd[1]: Hostname set to . Feb 9 18:32:24.670500 systemd[1]: Initializing machine ID from random generator. Feb 9 18:32:24.670508 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 18:32:24.670516 kernel: kauditd_printk_skb: 32 callbacks suppressed Feb 9 18:32:24.670526 kernel: audit: type=1400 audit(1707503537.201:88): avc: denied { associate } for pid=1092 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 18:32:24.670538 kernel: audit: type=1300 audit(1707503537.201:88): arch=c00000b7 syscall=5 success=yes exit=0 a0=400018a5e4 a1=400018e7b0 a2=400019e680 a3=32 items=0 ppid=1075 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:24.670548 kernel: audit: type=1327 audit(1707503537.201:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:32:24.670557 kernel: audit: type=1400 audit(1707503537.215:89): avc: denied { associate } for pid=1092 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 18:32:24.670567 kernel: audit: type=1300 audit(1707503537.215:89): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400018a6c9 a2=1ed a3=0 items=2 ppid=1075 pid=1092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:24.670575 kernel: audit: type=1307 audit(1707503537.215:89): cwd="/" Feb 9 18:32:24.670585 kernel: audit: type=1302 audit(1707503537.215:89): item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:24.670594 kernel: audit: type=1302 audit(1707503537.215:89): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:24.670603 kernel: audit: type=1327 audit(1707503537.215:89): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 18:32:24.670612 systemd[1]: Populated /etc with preset unit settings. Feb 9 18:32:24.670622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:32:24.670631 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:32:24.670641 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:32:24.670652 systemd[1]: Queued start job for default target multi-user.target. Feb 9 18:32:24.670661 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 18:32:24.670671 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 18:32:24.670680 systemd[1]: Created slice system-getty.slice. Feb 9 18:32:24.670689 systemd[1]: Created slice system-modprobe.slice. Feb 9 18:32:24.670699 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 18:32:24.670710 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 18:32:24.670721 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 18:32:24.670730 systemd[1]: Created slice user.slice. Feb 9 18:32:24.670739 systemd[1]: Started systemd-ask-password-console.path. Feb 9 18:32:24.670749 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 18:32:24.670758 systemd[1]: Set up automount boot.automount. Feb 9 18:32:24.670767 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 18:32:24.670776 systemd[1]: Reached target integritysetup.target. Feb 9 18:32:24.670786 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 18:32:24.670795 systemd[1]: Reached target remote-fs.target. Feb 9 18:32:24.670805 systemd[1]: Reached target slices.target. Feb 9 18:32:24.670814 systemd[1]: Reached target swap.target. Feb 9 18:32:24.670823 systemd[1]: Reached target torcx.target. Feb 9 18:32:24.670833 systemd[1]: Reached target veritysetup.target. Feb 9 18:32:24.670842 systemd[1]: Listening on systemd-coredump.socket. Feb 9 18:32:24.670851 systemd[1]: Listening on systemd-initctl.socket. Feb 9 18:32:24.670861 kernel: audit: type=1400 audit(1707503544.261:90): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 18:32:24.670872 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 18:32:24.670882 kernel: audit: type=1335 audit(1707503544.261:91): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:32:24.670891 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 18:32:24.670900 systemd[1]: Listening on systemd-journald.socket. Feb 9 18:32:24.670909 systemd[1]: Listening on systemd-networkd.socket. Feb 9 18:32:24.670918 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 18:32:24.670928 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 18:32:24.670938 systemd[1]: Listening on systemd-userdbd.socket. 
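[editor's note] "Initializing machine ID from random generator" on this first boot means systemd creates /etc/machine-id as a random 128-bit ID rendered as 32 lowercase hex characters plus a newline. A minimal sketch of producing an ID in that format; deriving it from a random UUID is a simplification of what systemd does (it can also take an ID handed in by the hypervisor or container manager):

```python
import uuid

def new_machine_id() -> str:
    # /etc/machine-id holds 32 lowercase hex characters and a trailing newline.
    return uuid.uuid4().hex

if __name__ == "__main__":
    print(new_machine_id())
```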
Feb 9 18:32:24.670948 systemd[1]: Mounting dev-hugepages.mount... Feb 9 18:32:24.670958 systemd[1]: Mounting dev-mqueue.mount... Feb 9 18:32:24.670967 systemd[1]: Mounting media.mount... Feb 9 18:32:24.670976 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 18:32:24.670985 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 18:32:24.670994 systemd[1]: Mounting tmp.mount... Feb 9 18:32:24.671005 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 18:32:24.671014 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 18:32:24.671024 systemd[1]: Starting kmod-static-nodes.service... Feb 9 18:32:24.671033 systemd[1]: Starting modprobe@configfs.service... Feb 9 18:32:24.671042 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 18:32:24.671052 systemd[1]: Starting modprobe@drm.service... Feb 9 18:32:24.671062 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 18:32:24.671071 systemd[1]: Starting modprobe@fuse.service... Feb 9 18:32:24.671080 systemd[1]: Starting modprobe@loop.service... Feb 9 18:32:24.671091 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 18:32:24.671101 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Feb 9 18:32:24.671111 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Feb 9 18:32:24.671120 kernel: fuse: init (API version 7.34) Feb 9 18:32:24.671129 systemd[1]: Starting systemd-journald.service... Feb 9 18:32:24.671138 kernel: loop: module loaded Feb 9 18:32:24.671147 systemd[1]: Starting systemd-modules-load.service... Feb 9 18:32:24.671157 systemd[1]: Starting systemd-network-generator.service... Feb 9 18:32:24.671167 systemd[1]: Starting systemd-remount-fs.service... Feb 9 18:32:24.671176 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 18:32:24.671186 systemd[1]: Mounted dev-hugepages.mount. Feb 9 18:32:24.671195 kernel: audit: type=1305 audit(1707503544.663:92): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:32:24.671207 systemd-journald[1219]: Journal started Feb 9 18:32:24.671244 systemd-journald[1219]: Runtime Journal (/run/log/journal/a8f43956804741c685fb28e59bb774ca) is 8.0M, max 78.6M, 70.6M free. 
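[editor's note] Alongside the modprobe@ template instances started above (configfs, dm_mod, drm, efi_pstore, fuse, loop), systemd-modules-load.service loads statically configured modules. A rough Python approximation of that service's job, assuming the standard modules-load.d directories and a modprobe binary on PATH; the real service uses libkmod directly and handles fragment overrides by file name:

```python
import glob
import subprocess

# Standard search path for modules-load.d fragments.
CONF_DIRS = ("/etc/modules-load.d", "/run/modules-load.d", "/usr/lib/modules-load.d")

def configured_modules() -> list:
    modules = []
    for d in CONF_DIRS:
        for conf in sorted(glob.glob(f"{d}/*.conf")):
            with open(conf) as fh:
                for line in fh:
                    line = line.strip()
                    if line and not line.startswith(("#", ";")):
                        modules.append(line)
    return modules

if __name__ == "__main__":
    for mod in configured_modules():
        # Shelling out to modprobe is a stand-in for the libkmod calls.
        subprocess.run(["modprobe", "--", mod], check=False)
```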
Feb 9 18:32:24.261000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Feb 9 18:32:24.663000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 18:32:24.663000 audit[1219]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffef8cf360 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:24.710590 kernel: audit: type=1300 audit(1707503544.663:92): arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffef8cf360 a2=4000 a3=1 items=0 ppid=1 pid=1219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:24.715829 systemd[1]: Started systemd-journald.service. Feb 9 18:32:24.715875 kernel: audit: type=1327 audit(1707503544.663:92): proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:32:24.663000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 18:32:24.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.729674 systemd[1]: Mounted dev-mqueue.mount. Feb 9 18:32:24.748490 kernel: audit: type=1130 audit(1707503544.728:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.753556 systemd[1]: Mounted media.mount. Feb 9 18:32:24.757785 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 18:32:24.762778 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 18:32:24.767714 systemd[1]: Mounted tmp.mount. Feb 9 18:32:24.771877 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 18:32:24.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.777334 systemd[1]: Finished kmod-static-nodes.service. Feb 9 18:32:24.800722 kernel: audit: type=1130 audit(1707503544.776:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.801597 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 9 18:32:24.801818 systemd[1]: Finished modprobe@configfs.service. Feb 9 18:32:24.825385 kernel: audit: type=1130 audit(1707503544.800:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:24.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.826084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 18:32:24.826320 systemd[1]: Finished modprobe@dm_mod.service. Feb 9 18:32:24.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.866338 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 18:32:24.866582 systemd[1]: Finished modprobe@drm.service. Feb 9 18:32:24.866993 kernel: audit: type=1130 audit(1707503544.825:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.867033 kernel: audit: type=1131 audit(1707503544.825:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.871817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 18:32:24.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.872237 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 18:32:24.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.879838 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 18:32:24.880114 systemd[1]: Finished modprobe@fuse.service. Feb 9 18:32:24.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:24.884997 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 18:32:24.885274 systemd[1]: Finished modprobe@loop.service. Feb 9 18:32:24.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.893559 systemd[1]: Finished systemd-modules-load.service. Feb 9 18:32:24.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.900634 systemd[1]: Finished systemd-network-generator.service. Feb 9 18:32:24.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.907148 systemd[1]: Finished systemd-remount-fs.service. Feb 9 18:32:24.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.912698 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 18:32:24.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:24.918471 systemd[1]: Reached target network-pre.target. Feb 9 18:32:24.925079 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 18:32:24.931133 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 18:32:24.935680 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 18:32:24.937469 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 18:32:24.943341 systemd[1]: Starting systemd-journal-flush.service... Feb 9 18:32:24.948208 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 18:32:24.949339 systemd[1]: Starting systemd-random-seed.service... Feb 9 18:32:24.954022 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 18:32:24.955024 systemd[1]: Starting systemd-sysctl.service... Feb 9 18:32:24.960166 systemd[1]: Starting systemd-sysusers.service... Feb 9 18:32:24.965444 systemd[1]: Starting systemd-udev-settle.service... Feb 9 18:32:24.971910 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 18:32:24.977443 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 18:32:24.985603 udevadm[1244]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 18:32:24.999641 systemd[1]: Finished systemd-random-seed.service. Feb 9 18:32:25.005427 systemd[1]: Reached target first-boot-complete.target. 
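[editor's note] systemd-random-seed.service, finished just before first-boot-complete.target here, carries entropy across reboots by feeding a saved seed into the kernel pool and writing a fresh one back. A simplified sketch of that load-and-refresh cycle, assuming the conventional /var/lib/systemd/random-seed path and ignoring the entropy-crediting ioctl the real service can use:

```python
import os

SEED_PATH = "/var/lib/systemd/random-seed"
SEED_SIZE = 512  # bytes; the real service sizes this from the kernel pool

def load_and_refresh_seed() -> None:
    # Feed the previous boot's seed into the pool (uncredited).
    if os.path.exists(SEED_PATH):
        with open(SEED_PATH, "rb") as seed, open("/dev/urandom", "wb") as pool:
            pool.write(seed.read())
    # Write a fresh seed for the next boot, readable only by root.
    fd = os.open(SEED_PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.write(fd, os.urandom(SEED_SIZE))
    finally:
        os.close(fd)

if __name__ == "__main__":
    load_and_refresh_seed()
```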
Feb 9 18:32:25.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:25.011092 systemd-journald[1219]: Time spent on flushing to /var/log/journal/a8f43956804741c685fb28e59bb774ca is 13.765ms for 1078 entries. Feb 9 18:32:25.011092 systemd-journald[1219]: System Journal (/var/log/journal/a8f43956804741c685fb28e59bb774ca) is 8.0M, max 2.6G, 2.6G free. Feb 9 18:32:25.070720 systemd-journald[1219]: Received client request to flush runtime journal. Feb 9 18:32:25.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:25.039940 systemd[1]: Finished systemd-sysctl.service. Feb 9 18:32:25.071678 systemd[1]: Finished systemd-journal-flush.service. Feb 9 18:32:25.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:25.657265 systemd[1]: Finished systemd-sysusers.service. Feb 9 18:32:25.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:25.663810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 18:32:25.952263 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 18:32:25.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:25.963192 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 18:32:25.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:25.970000 systemd[1]: Starting systemd-udevd.service... Feb 9 18:32:25.989240 systemd-udevd[1255]: Using default interface naming scheme 'v252'. Feb 9 18:32:26.259917 systemd[1]: Started systemd-udevd.service. Feb 9 18:32:26.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:26.271379 systemd[1]: Starting systemd-networkd.service... Feb 9 18:32:26.299355 systemd[1]: Found device dev-ttyAMA0.device. Feb 9 18:32:26.342596 systemd[1]: Starting systemd-userdbd.service... 
Feb 9 18:32:26.396000 audit[1270]: AVC avc: denied { confidentiality } for pid=1270 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 18:32:26.402957 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 18:32:26.403036 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 18:32:26.403055 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 18:32:26.403068 kernel: hv_vmbus: registering driver hv_balloon Feb 9 18:32:26.410767 kernel: hv_vmbus: registering driver hv_utils Feb 9 18:32:26.422548 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 18:32:26.422615 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 18:32:26.422630 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 18:32:26.432200 kernel: hv_balloon: Memory hot add disabled on ARM64 Feb 9 18:32:26.443903 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 18:32:26.443973 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 18:32:26.453569 kernel: Console: switching to colour dummy device 80x25 Feb 9 18:32:26.453643 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 18:32:26.420040 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 18:32:26.509089 systemd-journald[1219]: Time jumped backwards, rotating. Feb 9 18:32:26.396000 audit[1270]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=aaaac9aeb400 a1=aa2c a2=ffff970b24b0 a3=aaaac9a45010 items=12 ppid=1255 pid=1270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:26.396000 audit: CWD cwd="/" Feb 9 18:32:26.396000 audit: PATH item=0 name=(null) inode=5899 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=1 name=(null) inode=9852 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=2 name=(null) inode=9852 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=3 name=(null) inode=9853 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=4 name=(null) inode=9852 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=5 name=(null) inode=9854 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=6 name=(null) inode=9852 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=7 name=(null) inode=9855 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=8 
name=(null) inode=9852 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=9 name=(null) inode=9856 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=10 name=(null) inode=9852 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PATH item=11 name=(null) inode=9857 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:32:26.396000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 18:32:26.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:26.426080 systemd[1]: Started systemd-userdbd.service. Feb 9 18:32:26.617974 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1272) Feb 9 18:32:26.647263 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Feb 9 18:32:26.647617 systemd[1]: Finished systemd-udev-settle.service. Feb 9 18:32:26.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:26.654738 systemd[1]: Starting lvm2-activation-early.service... Feb 9 18:32:26.897114 systemd-networkd[1276]: lo: Link UP Feb 9 18:32:26.897127 systemd-networkd[1276]: lo: Gained carrier Feb 9 18:32:26.897538 systemd-networkd[1276]: Enumeration completed Feb 9 18:32:26.897691 systemd[1]: Started systemd-networkd.service. Feb 9 18:32:26.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:26.903982 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:32:26.910204 lvm[1337]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:32:26.927057 systemd-networkd[1276]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:32:26.954027 systemd[1]: Finished lvm2-activation-early.service. Feb 9 18:32:26.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:26.959714 systemd[1]: Reached target cryptsetup.target. Feb 9 18:32:26.965834 systemd[1]: Starting lvm2-activation.service... Feb 9 18:32:26.969828 lvm[1340]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 18:32:26.975987 kernel: mlx5_core 71d0:00:02.0 enP29136s1: Link up Feb 9 18:32:26.994105 systemd[1]: Finished lvm2-activation.service. 
Feb 9 18:32:27.007014 kernel: hv_netvsc 00224878-4224-0022-4878-422400224878 eth0: Data path switched to VF: enP29136s1 Feb 9 18:32:27.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:27.007315 systemd[1]: Reached target local-fs-pre.target. Feb 9 18:32:27.007770 systemd-networkd[1276]: enP29136s1: Link UP Feb 9 18:32:27.007943 systemd-networkd[1276]: eth0: Link UP Feb 9 18:32:27.008032 systemd-networkd[1276]: eth0: Gained carrier Feb 9 18:32:27.013264 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 18:32:27.013290 systemd[1]: Reached target local-fs.target. Feb 9 18:32:27.018792 systemd[1]: Reached target machines.target. Feb 9 18:32:27.023881 systemd-networkd[1276]: enP29136s1: Gained carrier Feb 9 18:32:27.026143 systemd[1]: Starting ldconfig.service... Feb 9 18:32:27.030544 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 18:32:27.030719 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:32:27.031973 systemd[1]: Starting systemd-boot-update.service... Feb 9 18:32:27.037528 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 18:32:27.044516 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 18:32:27.045219 systemd-networkd[1276]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:32:27.049750 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:32:27.049816 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 18:32:27.051041 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 18:32:27.082714 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 18:32:27.417038 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1343 (bootctl) Feb 9 18:32:27.418304 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 18:32:27.425534 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 18:32:27.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:27.571541 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 18:32:27.572862 systemd-tmpfiles[1346]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 18:32:27.935377 systemd-fsck[1352]: fsck.fat 4.2 (2021-01-31) Feb 9 18:32:27.935377 systemd-fsck[1352]: /dev/sda1: 236 files, 113719/258078 clusters Feb 9 18:32:27.937270 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 18:32:27.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:32:27.947620 systemd[1]: Mounting boot.mount... Feb 9 18:32:28.003603 systemd[1]: Mounted boot.mount. Feb 9 18:32:28.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:28.013857 systemd[1]: Finished systemd-boot-update.service. Feb 9 18:32:28.155822 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 18:32:28.156687 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 18:32:28.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:28.243240 systemd-networkd[1276]: eth0: Gained IPv6LL Feb 9 18:32:28.249929 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:32:28.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.419139 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 18:32:29.424000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.426345 systemd[1]: Starting audit-rules.service... Feb 9 18:32:29.428521 kernel: kauditd_printk_skb: 47 callbacks suppressed Feb 9 18:32:29.428579 kernel: audit: type=1130 audit(1707503549.424:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.452567 systemd[1]: Starting clean-ca-certificates.service... Feb 9 18:32:29.459007 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 18:32:29.465932 systemd[1]: Starting systemd-resolved.service... Feb 9 18:32:29.471748 systemd[1]: Starting systemd-timesyncd.service... Feb 9 18:32:29.477844 systemd[1]: Starting systemd-update-utmp.service... Feb 9 18:32:29.483295 systemd[1]: Finished clean-ca-certificates.service. Feb 9 18:32:29.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.493712 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 18:32:29.507981 kernel: audit: type=1130 audit(1707503549.487:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.523000 audit[1371]: SYSTEM_BOOT pid=1371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.544005 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 18:32:29.549998 kernel: audit: type=1127 audit(1707503549.523:132): pid=1371 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.570985 kernel: audit: type=1130 audit(1707503549.550:133): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.641311 systemd[1]: Started systemd-timesyncd.service. Feb 9 18:32:29.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.654231 systemd[1]: Reached target time-set.target. Feb 9 18:32:29.669717 kernel: audit: type=1130 audit(1707503549.645:134): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.699232 systemd-resolved[1369]: Positive Trust Anchors: Feb 9 18:32:29.699593 systemd-resolved[1369]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 18:32:29.699670 systemd-resolved[1369]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 18:32:29.724493 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 18:32:29.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.749532 systemd-resolved[1369]: Using system hostname 'ci-3510.3.2-a-e8e52debc2'. Feb 9 18:32:29.749990 kernel: audit: type=1130 audit(1707503549.729:135): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.751246 systemd[1]: Started systemd-resolved.service. Feb 9 18:32:29.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.758813 systemd[1]: Reached target network.target. Feb 9 18:32:29.781103 kernel: audit: type=1130 audit(1707503549.756:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:32:29.781505 systemd[1]: Reached target network-online.target. 
Feb 9 18:32:29.786506 systemd[1]: Reached target nss-lookup.target. Feb 9 18:32:29.871000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:32:29.873108 augenrules[1388]: No rules Feb 9 18:32:29.874096 systemd[1]: Finished audit-rules.service. Feb 9 18:32:29.871000 audit[1388]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdeb18330 a2=420 a3=0 items=0 ppid=1364 pid=1388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:29.918378 kernel: audit: type=1305 audit(1707503549.871:137): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 18:32:29.918436 kernel: audit: type=1300 audit(1707503549.871:137): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdeb18330 a2=420 a3=0 items=0 ppid=1364 pid=1388 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:32:29.871000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:32:29.932932 kernel: audit: type=1327 audit(1707503549.871:137): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 18:32:30.031702 systemd-timesyncd[1370]: Contacted time server 5.78.89.3:123 (0.flatcar.pool.ntp.org). Feb 9 18:32:30.031759 systemd-timesyncd[1370]: Initial clock synchronization to Fri 2024-02-09 18:32:30.032868 UTC. Feb 9 18:32:35.141897 ldconfig[1342]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 18:32:35.154799 systemd[1]: Finished ldconfig.service. Feb 9 18:32:35.161637 systemd[1]: Starting systemd-update-done.service... Feb 9 18:32:35.198276 systemd[1]: Finished systemd-update-done.service. Feb 9 18:32:35.204680 systemd[1]: Reached target sysinit.target. Feb 9 18:32:35.210249 systemd[1]: Started motdgen.path. Feb 9 18:32:35.215263 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 18:32:35.223345 systemd[1]: Started logrotate.timer. Feb 9 18:32:35.228681 systemd[1]: Started mdadm.timer. Feb 9 18:32:35.233576 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 18:32:35.239440 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 18:32:35.239477 systemd[1]: Reached target paths.target. Feb 9 18:32:35.244764 systemd[1]: Reached target timers.target. Feb 9 18:32:35.250605 systemd[1]: Listening on dbus.socket. Feb 9 18:32:35.256779 systemd[1]: Starting docker.socket... Feb 9 18:32:35.275719 systemd[1]: Listening on sshd.socket. Feb 9 18:32:35.281049 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:32:35.281546 systemd[1]: Listening on docker.socket. Feb 9 18:32:35.287052 systemd[1]: Reached target sockets.target. Feb 9 18:32:35.292568 systemd[1]: Reached target basic.target. Feb 9 18:32:35.297694 systemd[1]: System is tainted: cgroupsv1 Feb 9 18:32:35.297745 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 9 18:32:35.297766 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 18:32:35.298910 systemd[1]: Starting containerd.service... Feb 9 18:32:35.304498 systemd[1]: Starting dbus.service... Feb 9 18:32:35.309899 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 18:32:35.316424 systemd[1]: Starting extend-filesystems.service... Feb 9 18:32:35.321480 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 18:32:35.322619 systemd[1]: Starting motdgen.service... Feb 9 18:32:35.327989 systemd[1]: Started nvidia.service. Feb 9 18:32:35.334054 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 18:32:35.340516 systemd[1]: Starting prepare-critools.service... Feb 9 18:32:35.347545 systemd[1]: Starting prepare-helm.service... Feb 9 18:32:35.353215 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 18:32:35.359196 systemd[1]: Starting sshd-keygen.service... Feb 9 18:32:35.365444 systemd[1]: Starting systemd-logind.service... Feb 9 18:32:35.370904 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 18:32:35.370999 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 18:32:35.372151 systemd[1]: Starting update-engine.service... Feb 9 18:32:35.378034 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 18:32:35.390764 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 18:32:35.391084 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 18:32:35.423598 jq[1426]: true Feb 9 18:32:35.423892 jq[1402]: false Feb 9 18:32:35.426696 extend-filesystems[1403]: Found sda Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda1 Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda2 Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda3 Feb 9 18:32:35.432554 extend-filesystems[1403]: Found usr Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda4 Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda6 Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda7 Feb 9 18:32:35.432554 extend-filesystems[1403]: Found sda9 Feb 9 18:32:35.432554 extend-filesystems[1403]: Checking size of /dev/sda9 Feb 9 18:32:35.447402 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 18:32:35.447664 systemd[1]: Finished motdgen.service. Feb 9 18:32:35.469478 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 18:32:35.509721 jq[1443]: true Feb 9 18:32:35.469761 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 18:32:35.496923 systemd-logind[1421]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Feb 9 18:32:35.497407 systemd-logind[1421]: New seat seat0. 
Feb 9 18:32:35.531209 tar[1428]: ./ Feb 9 18:32:35.531209 tar[1428]: ./macvlan Feb 9 18:32:35.532759 tar[1430]: linux-arm64/helm Feb 9 18:32:35.533869 tar[1429]: crictl Feb 9 18:32:35.558988 env[1440]: time="2024-02-09T18:32:35.558917139Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 18:32:35.560076 extend-filesystems[1403]: Old size kept for /dev/sda9 Feb 9 18:32:35.573815 extend-filesystems[1403]: Found sr0 Feb 9 18:32:35.560737 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 18:32:35.561031 systemd[1]: Finished extend-filesystems.service. Feb 9 18:32:35.664485 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Feb 9 18:32:35.665350 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 18:32:35.680187 tar[1428]: ./static Feb 9 18:32:35.685696 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 18:32:35.687784 dbus-daemon[1401]: [system] SELinux support is enabled Feb 9 18:32:35.687994 systemd[1]: Started dbus.service. Feb 9 18:32:35.694126 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 18:32:35.694159 systemd[1]: Reached target system-config.target. Feb 9 18:32:35.705377 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 18:32:35.705405 systemd[1]: Reached target user-config.target. Feb 9 18:32:35.715615 systemd[1]: Started systemd-logind.service. Feb 9 18:32:35.721283 dbus-daemon[1401]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 18:32:35.722615 env[1440]: time="2024-02-09T18:32:35.722571748Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 18:32:35.722753 env[1440]: time="2024-02-09T18:32:35.722729478Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:32:35.725486 env[1440]: time="2024-02-09T18:32:35.725434281Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:32:35.725486 env[1440]: time="2024-02-09T18:32:35.725477084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:32:35.725795 env[1440]: time="2024-02-09T18:32:35.725760221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:32:35.725795 env[1440]: time="2024-02-09T18:32:35.725788983Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 18:32:35.725862 env[1440]: time="2024-02-09T18:32:35.725803864Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 18:32:35.725862 env[1440]: time="2024-02-09T18:32:35.725814024Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 Feb 9 18:32:35.725902 env[1440]: time="2024-02-09T18:32:35.725886949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:32:35.726183 env[1440]: time="2024-02-09T18:32:35.726154645Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 18:32:35.726362 env[1440]: time="2024-02-09T18:32:35.726334736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 18:32:35.726362 env[1440]: time="2024-02-09T18:32:35.726357897Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 18:32:35.726428 env[1440]: time="2024-02-09T18:32:35.726415701Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 18:32:35.726453 env[1440]: time="2024-02-09T18:32:35.726428221Z" level=info msg="metadata content store policy set" policy=shared Feb 9 18:32:35.762929 env[1440]: time="2024-02-09T18:32:35.762881304Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 18:32:35.763049 env[1440]: time="2024-02-09T18:32:35.762978990Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 18:32:35.763049 env[1440]: time="2024-02-09T18:32:35.762997791Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 18:32:35.763107 env[1440]: time="2024-02-09T18:32:35.763047114Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763107 env[1440]: time="2024-02-09T18:32:35.763063475Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763165 env[1440]: time="2024-02-09T18:32:35.763143960Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763198 env[1440]: time="2024-02-09T18:32:35.763173002Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763611 env[1440]: time="2024-02-09T18:32:35.763588667Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763644 env[1440]: time="2024-02-09T18:32:35.763613748Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763644 env[1440]: time="2024-02-09T18:32:35.763639070Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763692 env[1440]: time="2024-02-09T18:32:35.763654031Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.763692 env[1440]: time="2024-02-09T18:32:35.763668032Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 18:32:35.763852 env[1440]: time="2024-02-09T18:32:35.763829962Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 9 18:32:35.763977 env[1440]: time="2024-02-09T18:32:35.763932728Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 18:32:35.764617 env[1440]: time="2024-02-09T18:32:35.764588847Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 18:32:35.764686 env[1440]: time="2024-02-09T18:32:35.764630530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764686 env[1440]: time="2024-02-09T18:32:35.764647051Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 18:32:35.764751 env[1440]: time="2024-02-09T18:32:35.764709375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764751 env[1440]: time="2024-02-09T18:32:35.764725256Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764794 env[1440]: time="2024-02-09T18:32:35.764758858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764794 env[1440]: time="2024-02-09T18:32:35.764772979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764794 env[1440]: time="2024-02-09T18:32:35.764787139Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764853 env[1440]: time="2024-02-09T18:32:35.764800580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764853 env[1440]: time="2024-02-09T18:32:35.764812381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764853 env[1440]: time="2024-02-09T18:32:35.764833182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.764853 env[1440]: time="2024-02-09T18:32:35.764849303Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 18:32:35.766191 env[1440]: time="2024-02-09T18:32:35.766158382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.766191 env[1440]: time="2024-02-09T18:32:35.766193944Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.766283 env[1440]: time="2024-02-09T18:32:35.766208545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.766283 env[1440]: time="2024-02-09T18:32:35.766232987Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 18:32:35.766283 env[1440]: time="2024-02-09T18:32:35.766249308Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 18:32:35.766283 env[1440]: time="2024-02-09T18:32:35.766261268Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 9 18:32:35.766283 env[1440]: time="2024-02-09T18:32:35.766280030Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 18:32:35.766385 env[1440]: time="2024-02-09T18:32:35.766324592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 18:32:35.766633 env[1440]: time="2024-02-09T18:32:35.766569647Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.766640611Z" level=info msg="Connect containerd service" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.766672893Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.769698996Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.769902808Z" level=info msg="Start subscribing containerd event" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.769945251Z" level=info msg="Start recovering state" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.770019696Z" level=info msg="Start event monitor" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.770046097Z" level=info msg="Start snapshots syncer" Feb 9 
18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.770056938Z" level=info msg="Start cni network conf syncer for default" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.770064058Z" level=info msg="Start streaming server" Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.770337675Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 18:32:35.781326 env[1440]: time="2024-02-09T18:32:35.770404399Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 18:32:35.770595 systemd[1]: Started containerd.service. Feb 9 18:32:35.784555 tar[1428]: ./vlan Feb 9 18:32:35.790875 env[1440]: time="2024-02-09T18:32:35.790739988Z" level=info msg="containerd successfully booted in 0.257609s" Feb 9 18:32:35.839087 tar[1428]: ./portmap Feb 9 18:32:35.873942 tar[1428]: ./host-local Feb 9 18:32:35.898309 tar[1428]: ./vrf Feb 9 18:32:35.924867 tar[1428]: ./bridge Feb 9 18:32:35.956162 tar[1428]: ./tuning Feb 9 18:32:36.015057 tar[1428]: ./firewall Feb 9 18:32:36.091047 tar[1428]: ./host-device Feb 9 18:32:36.123373 update_engine[1424]: I0209 18:32:36.110008 1424 main.cc:92] Flatcar Update Engine starting Feb 9 18:32:36.163330 tar[1428]: ./sbr Feb 9 18:32:36.176752 systemd[1]: Started update-engine.service. Feb 9 18:32:36.183656 systemd[1]: Started locksmithd.service. Feb 9 18:32:36.188349 update_engine[1424]: I0209 18:32:36.188069 1424 update_check_scheduler.cc:74] Next update check in 9m57s Feb 9 18:32:36.228440 tar[1428]: ./loopback Feb 9 18:32:36.292627 tar[1428]: ./dhcp Feb 9 18:32:36.450062 tar[1430]: linux-arm64/LICENSE Feb 9 18:32:36.450062 tar[1430]: linux-arm64/README.md Feb 9 18:32:36.457860 systemd[1]: Finished prepare-helm.service. Feb 9 18:32:36.472710 tar[1428]: ./ptp Feb 9 18:32:36.490464 systemd[1]: Finished prepare-critools.service. Feb 9 18:32:36.517413 tar[1428]: ./ipvlan Feb 9 18:32:36.549498 tar[1428]: ./bandwidth Feb 9 18:32:36.631415 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 18:32:37.839942 locksmithd[1518]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 18:32:38.806085 sshd_keygen[1425]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 18:32:38.830396 systemd[1]: Finished sshd-keygen.service. Feb 9 18:32:38.838099 systemd[1]: Starting issuegen.service... Feb 9 18:32:38.843765 systemd[1]: Started waagent.service. Feb 9 18:32:38.849898 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 18:32:38.850153 systemd[1]: Finished issuegen.service. Feb 9 18:32:38.857140 systemd[1]: Starting systemd-user-sessions.service... Feb 9 18:32:38.891272 systemd[1]: Finished systemd-user-sessions.service. Feb 9 18:32:38.898569 systemd[1]: Started getty@tty1.service. Feb 9 18:32:38.905306 systemd[1]: Started serial-getty@ttyAMA0.service. Feb 9 18:32:38.911574 systemd[1]: Reached target getty.target. Feb 9 18:32:38.916578 systemd[1]: Reached target multi-user.target. Feb 9 18:32:38.928464 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 18:32:38.937261 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 18:32:38.937495 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 18:32:38.943563 systemd[1]: Startup finished in 18.858s (kernel) + 24.257s (userspace) = 43.115s. 
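For reference, the long CRI configuration block that containerd printed while starting (overlayfs snapshotter, runc with SystemdCgroup:false, sandbox image registry.k8s.io/pause:3.6, CNI config under /etc/cni/net.d) can also be checked on disk. The sketch below is an assumption-laden illustration: the /etc/containerd/config.toml path and the use of tomllib (Python 3.11+) are conventions, not facts from this log, and on this image the values may simply be built-in defaults with no config file present at all.

```python
# Hedged sketch: read a containerd v2-style config file and report a few of the
# CRI settings that the startup dump above prints (snapshotter, SystemdCgroup,
# sandbox image). The path is an assumption; requires Python 3.11+ for tomllib.
import tomllib

CONFIG_PATH = "/etc/containerd/config.toml"  # conventional location, not confirmed by this log

def summarize_cri(path: str = CONFIG_PATH) -> dict:
    with open(path, "rb") as f:
        cfg = tomllib.load(f)
    cri = cfg.get("plugins", {}).get("io.containerd.grpc.v1.cri", {})
    runc_opts = (cri.get("containerd", {})
                    .get("runtimes", {})
                    .get("runc", {})
                    .get("options", {}))
    return {
        "snapshotter": cri.get("containerd", {}).get("snapshotter"),
        "SystemdCgroup": runc_opts.get("SystemdCgroup"),
        "sandbox_image": cri.get("sandbox_image"),
    }

if __name__ == "__main__":
    print(summarize_cri())
```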
Feb 9 18:32:39.571013 login[1551]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 18:32:39.571560 login[1550]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:32:39.596449 systemd[1]: Created slice user-500.slice. Feb 9 18:32:39.597435 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 18:32:39.599594 systemd-logind[1421]: New session 2 of user core. Feb 9 18:32:39.633769 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 18:32:39.634986 systemd[1]: Starting user@500.service... Feb 9 18:32:39.652782 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:32:39.858028 systemd[1557]: Queued start job for default target default.target. Feb 9 18:32:39.858256 systemd[1557]: Reached target paths.target. Feb 9 18:32:39.858270 systemd[1557]: Reached target sockets.target. Feb 9 18:32:39.858281 systemd[1557]: Reached target timers.target. Feb 9 18:32:39.858291 systemd[1557]: Reached target basic.target. Feb 9 18:32:39.858407 systemd[1]: Started user@500.service. Feb 9 18:32:39.859246 systemd[1]: Started session-2.scope. Feb 9 18:32:39.859469 systemd[1557]: Reached target default.target. Feb 9 18:32:39.859630 systemd[1557]: Startup finished in 201ms. Feb 9 18:32:40.572755 login[1551]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 18:32:40.576504 systemd-logind[1421]: New session 1 of user core. Feb 9 18:32:40.576871 systemd[1]: Started session-1.scope. Feb 9 18:32:44.005805 waagent[1547]: 2024-02-09T18:32:44.005692Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 18:32:44.012612 waagent[1547]: 2024-02-09T18:32:44.012530Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 18:32:44.017379 waagent[1547]: 2024-02-09T18:32:44.017317Z INFO Daemon Daemon Python: 3.9.16 Feb 9 18:32:44.022794 waagent[1547]: 2024-02-09T18:32:44.022685Z INFO Daemon Daemon Run daemon Feb 9 18:32:44.028274 waagent[1547]: 2024-02-09T18:32:44.028200Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 18:32:44.045519 waagent[1547]: 2024-02-09T18:32:44.045380Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 18:32:44.061515 waagent[1547]: 2024-02-09T18:32:44.061375Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:32:44.072775 waagent[1547]: 2024-02-09T18:32:44.072690Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:32:44.078353 waagent[1547]: 2024-02-09T18:32:44.078284Z INFO Daemon Daemon Using waagent for provisioning Feb 9 18:32:44.085161 waagent[1547]: 2024-02-09T18:32:44.085087Z INFO Daemon Daemon Activate resource disk Feb 9 18:32:44.092072 waagent[1547]: 2024-02-09T18:32:44.092005Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 18:32:44.107270 waagent[1547]: 2024-02-09T18:32:44.107189Z INFO Daemon Daemon Found device: None Feb 9 18:32:44.112080 waagent[1547]: 2024-02-09T18:32:44.112010Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 18:32:44.121166 waagent[1547]: 2024-02-09T18:32:44.121094Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 18:32:44.134073 waagent[1547]: 2024-02-09T18:32:44.134005Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:32:44.140328 waagent[1547]: 2024-02-09T18:32:44.140265Z INFO Daemon Daemon Running default provisioning handler Feb 9 18:32:44.153500 waagent[1547]: 2024-02-09T18:32:44.153369Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 18:32:44.169972 waagent[1547]: 2024-02-09T18:32:44.169827Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 18:32:44.180799 waagent[1547]: 2024-02-09T18:32:44.180720Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 18:32:44.186594 waagent[1547]: 2024-02-09T18:32:44.186516Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 18:32:44.261292 waagent[1547]: 2024-02-09T18:32:44.259669Z INFO Daemon Daemon Successfully mounted dvd Feb 9 18:32:44.293387 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 18:32:44.313532 waagent[1547]: 2024-02-09T18:32:44.313401Z INFO Daemon Daemon Detect protocol endpoint Feb 9 18:32:44.318897 waagent[1547]: 2024-02-09T18:32:44.318818Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 18:32:44.325344 waagent[1547]: 2024-02-09T18:32:44.325268Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 18:32:44.332709 waagent[1547]: 2024-02-09T18:32:44.332638Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 18:32:44.338752 waagent[1547]: 2024-02-09T18:32:44.338688Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 18:32:44.344248 waagent[1547]: 2024-02-09T18:32:44.344186Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 18:32:44.431283 waagent[1547]: 2024-02-09T18:32:44.431213Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 18:32:44.438943 waagent[1547]: 2024-02-09T18:32:44.438895Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 18:32:44.444657 waagent[1547]: 2024-02-09T18:32:44.444594Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 18:32:44.912325 waagent[1547]: 2024-02-09T18:32:44.912179Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 18:32:44.927972 waagent[1547]: 2024-02-09T18:32:44.927889Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 18:32:44.934146 waagent[1547]: 2024-02-09T18:32:44.934086Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 18:32:45.006492 waagent[1547]: 2024-02-09T18:32:45.006373Z INFO Daemon Daemon Found private key matching thumbprint 0D9BF437DDEDD4ADD1D2331CF8A120D853028092 Feb 9 18:32:45.016625 waagent[1547]: 2024-02-09T18:32:45.016530Z INFO Daemon Daemon Certificate with thumbprint 44389390C33E5ADBB9E2B197918B16FAD3636C2F has no matching private key. Feb 9 18:32:45.027980 waagent[1547]: 2024-02-09T18:32:45.027887Z INFO Daemon Daemon Fetch goal state completed Feb 9 18:32:45.054250 waagent[1547]: 2024-02-09T18:32:45.054195Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: f3695879-290a-48e3-9c15-7c869cc19694 New eTag: 4803100139081934790] Feb 9 18:32:45.065714 waagent[1547]: 2024-02-09T18:32:45.065643Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:32:45.081834 waagent[1547]: 2024-02-09T18:32:45.081756Z INFO Daemon Daemon Starting provisioning Feb 9 18:32:45.087228 waagent[1547]: 2024-02-09T18:32:45.087162Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 18:32:45.092405 waagent[1547]: 2024-02-09T18:32:45.092347Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-e8e52debc2] Feb 9 18:32:45.133208 waagent[1547]: 2024-02-09T18:32:45.133083Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-e8e52debc2] Feb 9 18:32:45.140064 waagent[1547]: 2024-02-09T18:32:45.139988Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 18:32:45.146688 waagent[1547]: 2024-02-09T18:32:45.146626Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 18:32:45.162832 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 18:32:45.163054 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 18:32:45.163109 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 18:32:45.163298 systemd[1]: Stopping systemd-networkd.service... Feb 9 18:32:45.168002 systemd-networkd[1276]: eth0: DHCPv6 lease lost Feb 9 18:32:45.169539 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 18:32:45.169777 systemd[1]: Stopped systemd-networkd.service. Feb 9 18:32:45.171805 systemd[1]: Starting systemd-networkd.service... 
Feb 9 18:32:45.203127 systemd-networkd[1604]: enP29136s1: Link UP Feb 9 18:32:45.203135 systemd-networkd[1604]: enP29136s1: Gained carrier Feb 9 18:32:45.204244 systemd-networkd[1604]: eth0: Link UP Feb 9 18:32:45.204253 systemd-networkd[1604]: eth0: Gained carrier Feb 9 18:32:45.204562 systemd-networkd[1604]: lo: Link UP Feb 9 18:32:45.204571 systemd-networkd[1604]: lo: Gained carrier Feb 9 18:32:45.204794 systemd-networkd[1604]: eth0: Gained IPv6LL Feb 9 18:32:45.205225 systemd-networkd[1604]: Enumeration completed Feb 9 18:32:45.205851 systemd[1]: Started systemd-networkd.service. Feb 9 18:32:45.206123 systemd-networkd[1604]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 18:32:45.207669 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 18:32:45.214974 waagent[1547]: 2024-02-09T18:32:45.208894Z INFO Daemon Daemon Create user account if not exists Feb 9 18:32:45.215648 waagent[1547]: 2024-02-09T18:32:45.215573Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 18:32:45.223543 waagent[1547]: 2024-02-09T18:32:45.223468Z INFO Daemon Daemon Configure sudoer Feb 9 18:32:45.229019 waagent[1547]: 2024-02-09T18:32:45.228934Z INFO Daemon Daemon Configure sshd Feb 9 18:32:45.233704 waagent[1547]: 2024-02-09T18:32:45.233642Z INFO Daemon Daemon Deploy ssh public key. Feb 9 18:32:45.240054 systemd-networkd[1604]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Feb 9 18:32:45.243878 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 18:32:46.471637 waagent[1547]: 2024-02-09T18:32:46.471553Z INFO Daemon Daemon Provisioning complete Feb 9 18:32:46.498606 waagent[1547]: 2024-02-09T18:32:46.498540Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 18:32:46.505870 waagent[1547]: 2024-02-09T18:32:46.505799Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 18:32:46.517637 waagent[1547]: 2024-02-09T18:32:46.517561Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 18:32:46.816650 waagent[1614]: 2024-02-09T18:32:46.816499Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 18:32:46.817369 waagent[1614]: 2024-02-09T18:32:46.817312Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:46.817505 waagent[1614]: 2024-02-09T18:32:46.817459Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:46.829535 waagent[1614]: 2024-02-09T18:32:46.829466Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 18:32:46.829706 waagent[1614]: 2024-02-09T18:32:46.829658Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 18:32:46.895660 waagent[1614]: 2024-02-09T18:32:46.895524Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0D9BF437DDEDD4ADD1D2331CF8A120D853028092 Feb 9 18:32:46.895861 waagent[1614]: 2024-02-09T18:32:46.895809Z INFO ExtHandler ExtHandler Certificate with thumbprint 44389390C33E5ADBB9E2B197918B16FAD3636C2F has no matching private key. 
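The goal-state handling above matches certificates to private keys by thumbprint (0D9BF437… matched, 44389390… did not). As an illustration of how such a thumbprint is conventionally derived, the sketch below computes the uppercase hex SHA-1 of a DER-encoded certificate; that convention, the stdlib-only approach, and any file path you pass in are assumptions rather than details taken from this log.

```python
# Hedged sketch: compute an Azure-style certificate thumbprint (assumed here to
# be the uppercase hex SHA-1 of the DER-encoded certificate) so it can be
# compared with the 0D9B.../4438... values reported above. Paths are examples.
import hashlib
import ssl
import sys

def thumbprint_from_pem(path: str) -> str:
    """Thumbprint of a file containing a single PEM certificate."""
    with open(path, "r") as f:
        pem = f.read()
    der = ssl.PEM_cert_to_DER_cert(pem)   # strip PEM armor -> raw DER bytes
    return hashlib.sha1(der).hexdigest().upper()

if __name__ == "__main__":
    # e.g. python3 thumbprint.py /path/to/certificate.pem
    for cert_path in sys.argv[1:]:
        print(cert_path, thumbprint_from_pem(cert_path))
```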
Feb 9 18:32:46.896112 waagent[1614]: 2024-02-09T18:32:46.896062Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 18:32:46.908808 waagent[1614]: 2024-02-09T18:32:46.908756Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 75e0b9da-3539-4b2f-a0b2-482b91b0bc8a New eTag: 4803100139081934790] Feb 9 18:32:46.909401 waagent[1614]: 2024-02-09T18:32:46.909343Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 18:32:46.983823 waagent[1614]: 2024-02-09T18:32:46.983686Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:32:46.993976 waagent[1614]: 2024-02-09T18:32:46.993885Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1614 Feb 9 18:32:46.997727 waagent[1614]: 2024-02-09T18:32:46.997662Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:32:46.999072 waagent[1614]: 2024-02-09T18:32:46.999016Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:32:47.146014 waagent[1614]: 2024-02-09T18:32:47.145927Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:32:47.146437 waagent[1614]: 2024-02-09T18:32:47.146376Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:32:47.153842 waagent[1614]: 2024-02-09T18:32:47.153789Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 18:32:47.154506 waagent[1614]: 2024-02-09T18:32:47.154451Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:32:47.155747 waagent[1614]: 2024-02-09T18:32:47.155686Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 18:32:47.157359 waagent[1614]: 2024-02-09T18:32:47.157289Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:32:47.157609 waagent[1614]: 2024-02-09T18:32:47.157525Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:47.158007 waagent[1614]: 2024-02-09T18:32:47.157918Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:47.158995 waagent[1614]: 2024-02-09T18:32:47.158896Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 18:32:47.159343 waagent[1614]: 2024-02-09T18:32:47.159280Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:32:47.159343 waagent[1614]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:32:47.159343 waagent[1614]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:32:47.159343 waagent[1614]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:32:47.159343 waagent[1614]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:47.159343 waagent[1614]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:47.159343 waagent[1614]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:47.161504 waagent[1614]: 2024-02-09T18:32:47.161347Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:32:47.161854 waagent[1614]: 2024-02-09T18:32:47.161782Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:47.162601 waagent[1614]: 2024-02-09T18:32:47.162528Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:47.163186 waagent[1614]: 2024-02-09T18:32:47.163115Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:32:47.163344 waagent[1614]: 2024-02-09T18:32:47.163297Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:32:47.163457 waagent[1614]: 2024-02-09T18:32:47.163416Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:32:47.164369 waagent[1614]: 2024-02-09T18:32:47.164310Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:32:47.164520 waagent[1614]: 2024-02-09T18:32:47.164453Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 18:32:47.165282 waagent[1614]: 2024-02-09T18:32:47.165195Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:32:47.165459 waagent[1614]: 2024-02-09T18:32:47.165392Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 18:32:47.165757 waagent[1614]: 2024-02-09T18:32:47.165693Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:32:47.175467 waagent[1614]: 2024-02-09T18:32:47.175402Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 18:32:47.177727 waagent[1614]: 2024-02-09T18:32:47.177678Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:32:47.178721 waagent[1614]: 2024-02-09T18:32:47.178670Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 18:32:47.222891 waagent[1614]: 2024-02-09T18:32:47.222770Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1604' Feb 9 18:32:47.240663 waagent[1614]: 2024-02-09T18:32:47.240599Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
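The Destination, Gateway and Mask columns in the /proc/net/route dump above are 32-bit little-endian hex words. A minimal sketch (Python 3 standard library only; the helper name hex_to_ip is illustrative, not part of waagent) that converts the values seen in this log back into dotted-quad form:

# /proc/net/route stores IPv4 addresses as little-endian 32-bit hex words;
# unpack them into the usual dotted-quad notation.
import socket
import struct

def hex_to_ip(word: str) -> str:
    return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

for word in ("0114C80A", "0014C80A", "10813FA8", "FEA9FEA9", "00FFFFFF"):
    print(word, "->", hex_to_ip(word))
# 0114C80A -> 10.200.20.1 (default gateway), 0014C80A -> 10.200.20.0,
# 10813FA8 -> 168.63.129.16 (Azure wire server), FEA9FEA9 -> 169.254.169.254
# (instance metadata service), 00FFFFFF -> 255.255.255.0 (the eth0 /24 mask)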
Feb 9 18:32:47.314260 waagent[1614]: 2024-02-09T18:32:47.312386Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:32:47.314260 waagent[1614]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:32:47.314260 waagent[1614]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:32:47.314260 waagent[1614]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:42:24 brd ff:ff:ff:ff:ff:ff Feb 9 18:32:47.314260 waagent[1614]: 3: enP29136s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:42:24 brd ff:ff:ff:ff:ff:ff\ altname enP29136p0s2 Feb 9 18:32:47.314260 waagent[1614]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:32:47.314260 waagent[1614]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:32:47.314260 waagent[1614]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:32:47.314260 waagent[1614]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:32:47.314260 waagent[1614]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:32:47.314260 waagent[1614]: 2: eth0 inet6 fe80::222:48ff:fe78:4224/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:32:47.365853 waagent[1614]: 2024-02-09T18:32:47.365791Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 18:32:47.521298 waagent[1547]: 2024-02-09T18:32:47.521146Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 18:32:47.525176 waagent[1547]: 2024-02-09T18:32:47.525124Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 18:32:48.647426 waagent[1642]: 2024-02-09T18:32:48.647325Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 18:32:48.648117 waagent[1642]: 2024-02-09T18:32:48.648058Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 18:32:48.648251 waagent[1642]: 2024-02-09T18:32:48.648206Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 18:32:48.655907 waagent[1642]: 2024-02-09T18:32:48.655789Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 18:32:48.656306 waagent[1642]: 2024-02-09T18:32:48.656250Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:48.656454 waagent[1642]: 2024-02-09T18:32:48.656407Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:48.668943 waagent[1642]: 2024-02-09T18:32:48.668877Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 18:32:48.681442 waagent[1642]: 2024-02-09T18:32:48.681387Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 18:32:48.682473 waagent[1642]: 2024-02-09T18:32:48.682413Z INFO ExtHandler Feb 9 18:32:48.682622 waagent[1642]: 2024-02-09T18:32:48.682574Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a81c481b-eca1-4180-8bd9-ab44ff38ec1c eTag: 4803100139081934790 source: Fabric] Feb 9 18:32:48.683362 waagent[1642]: 2024-02-09T18:32:48.683307Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
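The EnvHandler error at 18:32:47.222 above ("invalid literal for int() with base 10: 'MainPID=1604'") apparently comes from parsing systemctl output: `systemctl show --property MainPID <unit>` prints a key=value pair, not a bare PID. A hypothetical helper, not waagent's own code, sketching the parse that output format requires (assuming systemd-networkd.service, PID 1604 on this host, is the unit of interest):

# `systemctl show --property MainPID <unit>` prints e.g. "MainPID=1604",
# so the numeric part has to be split off the key before calling int().
import subprocess

def main_pid(unit: str) -> int:
    out = subprocess.run(
        ["systemctl", "show", "--property", "MainPID", unit],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return int(out.split("=", 1)[1])

# Example: main_pid("systemd-networkd.service") would return 1604 on this host.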
Feb 9 18:32:48.684552 waagent[1642]: 2024-02-09T18:32:48.684493Z INFO ExtHandler Feb 9 18:32:48.684725 waagent[1642]: 2024-02-09T18:32:48.684674Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 18:32:48.691077 waagent[1642]: 2024-02-09T18:32:48.691029Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 18:32:48.691525 waagent[1642]: 2024-02-09T18:32:48.691477Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 18:32:48.714401 waagent[1642]: 2024-02-09T18:32:48.714341Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 18:32:48.784529 waagent[1642]: 2024-02-09T18:32:48.784398Z INFO ExtHandler Downloaded certificate {'thumbprint': '44389390C33E5ADBB9E2B197918B16FAD3636C2F', 'hasPrivateKey': False} Feb 9 18:32:48.785562 waagent[1642]: 2024-02-09T18:32:48.785500Z INFO ExtHandler Downloaded certificate {'thumbprint': '0D9BF437DDEDD4ADD1D2331CF8A120D853028092', 'hasPrivateKey': True} Feb 9 18:32:48.786606 waagent[1642]: 2024-02-09T18:32:48.786547Z INFO ExtHandler Fetch goal state completed Feb 9 18:32:48.812607 waagent[1642]: 2024-02-09T18:32:48.812535Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1642 Feb 9 18:32:48.816107 waagent[1642]: 2024-02-09T18:32:48.816045Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 18:32:48.817630 waagent[1642]: 2024-02-09T18:32:48.817573Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 18:32:48.822447 waagent[1642]: 2024-02-09T18:32:48.822383Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 18:32:48.822841 waagent[1642]: 2024-02-09T18:32:48.822782Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 18:32:48.830490 waagent[1642]: 2024-02-09T18:32:48.830420Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 18:32:48.830997 waagent[1642]: 2024-02-09T18:32:48.830918Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 18:32:48.836901 waagent[1642]: 2024-02-09T18:32:48.836785Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 18:32:48.840595 waagent[1642]: 2024-02-09T18:32:48.840535Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 18:32:48.842118 waagent[1642]: 2024-02-09T18:32:48.842048Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 18:32:48.842913 waagent[1642]: 2024-02-09T18:32:48.842854Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:48.843217 waagent[1642]: 2024-02-09T18:32:48.843165Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:48.843867 waagent[1642]: 2024-02-09T18:32:48.843811Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 18:32:48.844268 waagent[1642]: 2024-02-09T18:32:48.844215Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 18:32:48.844268 waagent[1642]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 18:32:48.844268 waagent[1642]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 18:32:48.844268 waagent[1642]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 18:32:48.844268 waagent[1642]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:48.844268 waagent[1642]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:48.844268 waagent[1642]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 18:32:48.846801 waagent[1642]: 2024-02-09T18:32:48.846686Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 18:32:48.847458 waagent[1642]: 2024-02-09T18:32:48.847396Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 18:32:48.849628 waagent[1642]: 2024-02-09T18:32:48.849477Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 18:32:48.850467 waagent[1642]: 2024-02-09T18:32:48.850397Z INFO EnvHandler ExtHandler Configure routes Feb 9 18:32:48.850623 waagent[1642]: 2024-02-09T18:32:48.850575Z INFO EnvHandler ExtHandler Gateway:None Feb 9 18:32:48.850739 waagent[1642]: 2024-02-09T18:32:48.850695Z INFO EnvHandler ExtHandler Routes:None Feb 9 18:32:48.851721 waagent[1642]: 2024-02-09T18:32:48.851659Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 18:32:48.852076 waagent[1642]: 2024-02-09T18:32:48.852000Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 18:32:48.852657 waagent[1642]: 2024-02-09T18:32:48.852600Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 9 18:32:48.852862 waagent[1642]: 2024-02-09T18:32:48.852513Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 18:32:48.866938 waagent[1642]: 2024-02-09T18:32:48.866854Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 18:32:48.866938 waagent[1642]: Executing ['ip', '-a', '-o', 'link']: Feb 9 18:32:48.866938 waagent[1642]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 18:32:48.866938 waagent[1642]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:42:24 brd ff:ff:ff:ff:ff:ff Feb 9 18:32:48.866938 waagent[1642]: 3: enP29136s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:78:42:24 brd ff:ff:ff:ff:ff:ff\ altname enP29136p0s2 Feb 9 18:32:48.866938 waagent[1642]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 18:32:48.866938 waagent[1642]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 18:32:48.866938 waagent[1642]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 18:32:48.866938 waagent[1642]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 18:32:48.866938 waagent[1642]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 18:32:48.866938 waagent[1642]: 2: eth0 inet6 fe80::222:48ff:fe78:4224/64 scope link \ valid_lft forever preferred_lft forever Feb 9 18:32:48.870637 waagent[1642]: 2024-02-09T18:32:48.870479Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 18:32:48.878060 waagent[1642]: 2024-02-09T18:32:48.877931Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 18:32:48.879390 waagent[1642]: 2024-02-09T18:32:48.879322Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 18:32:48.903031 waagent[1642]: 2024-02-09T18:32:48.902927Z INFO ExtHandler ExtHandler Feb 9 18:32:48.903297 waagent[1642]: 2024-02-09T18:32:48.903240Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 3981f6bc-3ac9-4f72-a03f-642654ffdba0 correlation 1b471507-1829-402e-b4c7-80dce3104472 created: 2024-02-09T18:31:11.676529Z] Feb 9 18:32:48.904269 waagent[1642]: 2024-02-09T18:32:48.904214Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 18:32:48.906173 waagent[1642]: 2024-02-09T18:32:48.906120Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 18:32:48.931848 waagent[1642]: 2024-02-09T18:32:48.931783Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 18:32:48.955275 waagent[1642]: 2024-02-09T18:32:48.955136Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9FDC1AC9-B6CE-4BA5-B4A0-2ACAA7123BD4;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 18:32:49.104209 waagent[1642]: 2024-02-09T18:32:49.104055Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 9 18:32:49.104209 waagent[1642]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:49.104209 waagent[1642]: pkts bytes target prot opt in out source destination Feb 9 18:32:49.104209 waagent[1642]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:49.104209 waagent[1642]: pkts bytes target prot opt in out source destination Feb 9 18:32:49.104209 waagent[1642]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:49.104209 waagent[1642]: pkts bytes target prot opt in out source destination Feb 9 18:32:49.104209 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:32:49.104209 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:32:49.104209 waagent[1642]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:32:49.112275 waagent[1642]: 2024-02-09T18:32:49.112127Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 18:32:49.112275 waagent[1642]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:49.112275 waagent[1642]: pkts bytes target prot opt in out source destination Feb 9 18:32:49.112275 waagent[1642]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:49.112275 waagent[1642]: pkts bytes target prot opt in out source destination Feb 9 18:32:49.112275 waagent[1642]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 18:32:49.112275 waagent[1642]: pkts bytes target prot opt in out source destination Feb 9 18:32:49.112275 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 18:32:49.112275 waagent[1642]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 18:32:49.112275 waagent[1642]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 18:32:49.112880 waagent[1642]: 2024-02-09T18:32:49.112818Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 18:33:14.502804 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Feb 9 18:33:18.781944 systemd[1]: Created slice system-sshd.slice. Feb 9 18:33:18.783150 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.12.6:36702.service. Feb 9 18:33:19.422449 sshd[1691]: Accepted publickey for core from 10.200.12.6 port 36702 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:19.445108 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:19.449075 systemd[1]: Started session-3.scope. Feb 9 18:33:19.449247 systemd-logind[1421]: New session 3 of user core. Feb 9 18:33:19.808737 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.12.6:36706.service. Feb 9 18:33:20.222102 sshd[1696]: Accepted publickey for core from 10.200.12.6 port 36706 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:20.223627 sshd[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:20.226997 systemd-logind[1421]: New session 4 of user core. Feb 9 18:33:20.227507 systemd[1]: Started session-4.scope. Feb 9 18:33:20.522972 sshd[1696]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:20.525374 systemd[1]: sshd@1-10.200.20.40:22-10.200.12.6:36706.service: Deactivated successfully. Feb 9 18:33:20.526119 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 18:33:20.527016 systemd-logind[1421]: Session 4 logged out. Waiting for processes to exit. Feb 9 18:33:20.527893 systemd-logind[1421]: Removed session 4. Feb 9 18:33:20.601135 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.12.6:36710.service. 
Feb 9 18:33:21.048447 sshd[1703]: Accepted publickey for core from 10.200.12.6 port 36710 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:21.049659 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:21.053377 systemd-logind[1421]: New session 5 of user core. Feb 9 18:33:21.053763 systemd[1]: Started session-5.scope. Feb 9 18:33:21.368498 sshd[1703]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:21.371188 systemd[1]: sshd@2-10.200.20.40:22-10.200.12.6:36710.service: Deactivated successfully. Feb 9 18:33:21.372095 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 18:33:21.372114 systemd-logind[1421]: Session 5 logged out. Waiting for processes to exit. Feb 9 18:33:21.373000 systemd-logind[1421]: Removed session 5. Feb 9 18:33:21.436235 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.12.6:36720.service. Feb 9 18:33:21.829271 update_engine[1424]: I0209 18:33:21.828928 1424 update_attempter.cc:509] Updating boot flags... Feb 9 18:33:21.849835 sshd[1710]: Accepted publickey for core from 10.200.12.6 port 36720 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:21.851353 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:21.854921 systemd-logind[1421]: New session 6 of user core. Feb 9 18:33:21.855358 systemd[1]: Started session-6.scope. Feb 9 18:33:22.151515 sshd[1710]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:22.153763 systemd[1]: sshd@3-10.200.20.40:22-10.200.12.6:36720.service: Deactivated successfully. Feb 9 18:33:22.154462 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 18:33:22.155505 systemd-logind[1421]: Session 6 logged out. Waiting for processes to exit. Feb 9 18:33:22.156293 systemd-logind[1421]: Removed session 6. Feb 9 18:33:22.228125 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.12.6:36724.service. Feb 9 18:33:22.683125 sshd[1756]: Accepted publickey for core from 10.200.12.6 port 36724 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:22.684644 sshd[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:22.688628 systemd[1]: Started session-7.scope. Feb 9 18:33:22.689686 systemd-logind[1421]: New session 7 of user core. Feb 9 18:33:23.236487 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 9 18:33:23.236687 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:33:23.265832 dbus-daemon[1401]: avc: received setenforce notice (enforcing=1) Feb 9 18:33:23.265993 sudo[1760]: pam_unix(sudo:session): session closed for user root Feb 9 18:33:23.353531 sshd[1756]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:23.356546 systemd[1]: sshd@4-10.200.20.40:22-10.200.12.6:36724.service: Deactivated successfully. Feb 9 18:33:23.357545 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 18:33:23.357937 systemd-logind[1421]: Session 7 logged out. Waiting for processes to exit. Feb 9 18:33:23.358703 systemd-logind[1421]: Removed session 7. Feb 9 18:33:23.426652 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.12.6:36730.service. 
Feb 9 18:33:23.874043 sshd[1764]: Accepted publickey for core from 10.200.12.6 port 36730 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:23.878281 sshd[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:23.882112 systemd-logind[1421]: New session 8 of user core. Feb 9 18:33:23.882452 systemd[1]: Started session-8.scope. Feb 9 18:33:24.127090 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 9 18:33:24.127744 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:33:24.130281 sudo[1769]: pam_unix(sudo:session): session closed for user root Feb 9 18:33:24.134215 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Feb 9 18:33:24.134410 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:33:24.142048 systemd[1]: Stopping audit-rules.service... Feb 9 18:33:24.142000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 18:33:24.154022 auditctl[1772]: No rules Feb 9 18:33:24.142000 audit[1772]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffffeb29f0 a2=420 a3=0 items=0 ppid=1 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:24.154552 systemd[1]: audit-rules.service: Deactivated successfully. Feb 9 18:33:24.154775 systemd[1]: Stopped audit-rules.service. Feb 9 18:33:24.156426 systemd[1]: Starting audit-rules.service... Feb 9 18:33:24.179897 kernel: audit: type=1305 audit(1707503604.142:138): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Feb 9 18:33:24.180017 kernel: audit: type=1300 audit(1707503604.142:138): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffffeb29f0 a2=420 a3=0 items=0 ppid=1 pid=1772 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:24.142000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Feb 9 18:33:24.187621 kernel: audit: type=1327 audit(1707503604.142:138): proctitle=2F7362696E2F617564697463746C002D44 Feb 9 18:33:24.187727 kernel: audit: type=1131 audit(1707503604.153:139): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.196640 augenrules[1790]: No rules Feb 9 18:33:24.197661 systemd[1]: Finished audit-rules.service. Feb 9 18:33:24.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:33:24.207397 sudo[1768]: pam_unix(sudo:session): session closed for user root Feb 9 18:33:24.223582 kernel: audit: type=1130 audit(1707503604.196:140): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.206000 audit[1768]: USER_END pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.242617 kernel: audit: type=1106 audit(1707503604.206:141): pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.242668 kernel: audit: type=1104 audit(1707503604.206:142): pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.206000 audit[1768]: CRED_DISP pid=1768 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.293140 sshd[1764]: pam_unix(sshd:session): session closed for user core Feb 9 18:33:24.293000 audit[1764]: USER_END pid=1764 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.296030 systemd-logind[1421]: Session 8 logged out. Waiting for processes to exit. Feb 9 18:33:24.296722 systemd[1]: sshd@5-10.200.20.40:22-10.200.12.6:36730.service: Deactivated successfully. Feb 9 18:33:24.297455 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 18:33:24.298435 systemd-logind[1421]: Removed session 8. Feb 9 18:33:24.293000 audit[1764]: CRED_DISP pid=1764 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.335145 kernel: audit: type=1106 audit(1707503604.293:143): pid=1764 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.335213 kernel: audit: type=1104 audit(1707503604.293:144): pid=1764 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.335244 kernel: audit: type=1131 audit(1707503604.295:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.40:22-10.200.12.6:36730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:33:24.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.200.20.40:22-10.200.12.6:36730 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.367815 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.12.6:36740.service. Feb 9 18:33:24.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.40:22-10.200.12.6:36740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:24.816000 audit[1797]: USER_ACCT pid=1797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.817590 sshd[1797]: Accepted publickey for core from 10.200.12.6 port 36740 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:33:24.817000 audit[1797]: CRED_ACQ pid=1797 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.817000 audit[1797]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8ab0e00 a2=3 a3=1 items=0 ppid=1 pid=1797 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:24.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:33:24.819109 sshd[1797]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:33:24.822881 systemd-logind[1421]: New session 9 of user core. Feb 9 18:33:24.823289 systemd[1]: Started session-9.scope. Feb 9 18:33:24.826000 audit[1797]: USER_START pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:24.827000 audit[1800]: CRED_ACQ pid=1800 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:33:25.071000 audit[1801]: USER_ACCT pid=1801 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:33:25.072366 sudo[1801]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 18:33:25.071000 audit[1801]: CRED_REFR pid=1801 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:33:25.072855 sudo[1801]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 18:33:25.073000 audit[1801]: USER_START pid=1801 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 9 18:33:25.723704 systemd[1]: Starting docker.service... Feb 9 18:33:25.777212 env[1817]: time="2024-02-09T18:33:25.777156416Z" level=info msg="Starting up" Feb 9 18:33:25.778755 env[1817]: time="2024-02-09T18:33:25.778726619Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:33:25.778856 env[1817]: time="2024-02-09T18:33:25.778842580Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:33:25.778923 env[1817]: time="2024-02-09T18:33:25.778908300Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:33:25.779020 env[1817]: time="2024-02-09T18:33:25.779005140Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:33:25.780782 env[1817]: time="2024-02-09T18:33:25.780749824Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 18:33:25.780782 env[1817]: time="2024-02-09T18:33:25.780775464Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 18:33:25.780891 env[1817]: time="2024-02-09T18:33:25.780792264Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 18:33:25.780891 env[1817]: time="2024-02-09T18:33:25.780803904Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 18:33:25.786843 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2470081121-merged.mount: Deactivated successfully. Feb 9 18:33:25.947346 env[1817]: time="2024-02-09T18:33:25.947310624Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 9 18:33:25.947523 env[1817]: time="2024-02-09T18:33:25.947510144Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 9 18:33:25.947747 env[1817]: time="2024-02-09T18:33:25.947731625Z" level=info msg="Loading containers: start." 
Feb 9 18:33:26.011000 audit[1845]: NETFILTER_CFG table=nat:5 family=2 entries=2 op=nft_register_chain pid=1845 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.011000 audit[1845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd9bb7090 a2=0 a3=1 items=0 ppid=1817 pid=1845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.011000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Feb 9 18:33:26.013000 audit[1847]: NETFILTER_CFG table=filter:6 family=2 entries=2 op=nft_register_chain pid=1847 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.013000 audit[1847]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd426bba0 a2=0 a3=1 items=0 ppid=1817 pid=1847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.013000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Feb 9 18:33:26.015000 audit[1849]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.015000 audit[1849]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffec706ee0 a2=0 a3=1 items=0 ppid=1817 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.015000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 18:33:26.016000 audit[1851]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.016000 audit[1851]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffccb22610 a2=0 a3=1 items=0 ppid=1817 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.016000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 18:33:26.018000 audit[1853]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.018000 audit[1853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcd721230 a2=0 a3=1 items=0 ppid=1817 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.018000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Feb 9 18:33:26.020000 audit[1855]: NETFILTER_CFG table=filter:10 family=2 entries=1 op=nft_register_rule pid=1855 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.020000 audit[1855]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=228 a0=3 a1=ffffdeaf0080 a2=0 a3=1 items=0 ppid=1817 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.020000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Feb 9 18:33:26.063000 audit[1857]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_register_chain pid=1857 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.063000 audit[1857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffce1276e0 a2=0 a3=1 items=0 ppid=1817 pid=1857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.063000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Feb 9 18:33:26.065000 audit[1859]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.065000 audit[1859]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff30339d0 a2=0 a3=1 items=0 ppid=1817 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.065000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Feb 9 18:33:26.066000 audit[1861]: NETFILTER_CFG table=filter:13 family=2 entries=2 op=nft_register_chain pid=1861 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.066000 audit[1861]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffe9b4cab0 a2=0 a3=1 items=0 ppid=1817 pid=1861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.066000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:33:26.099000 audit[1865]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_unregister_rule pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.099000 audit[1865]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc27f4cd0 a2=0 a3=1 items=0 ppid=1817 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.099000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:33:26.100000 audit[1866]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.100000 audit[1866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd2807c40 a2=0 a3=1 items=0 ppid=1817 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.100000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:33:26.146979 kernel: Initializing XFRM netlink socket Feb 9 18:33:26.168452 env[1817]: time="2024-02-09T18:33:26.168411329Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 18:33:26.251000 audit[1874]: NETFILTER_CFG table=nat:16 family=2 entries=2 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.251000 audit[1874]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffcb5c6410 a2=0 a3=1 items=0 ppid=1817 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.251000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Feb 9 18:33:26.260000 audit[1877]: NETFILTER_CFG table=nat:17 family=2 entries=1 op=nft_register_rule pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.260000 audit[1877]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffdec20010 a2=0 a3=1 items=0 ppid=1817 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.260000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Feb 9 18:33:26.262000 audit[1880]: NETFILTER_CFG table=filter:18 family=2 entries=1 op=nft_register_rule pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.262000 audit[1880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffecc8c3c0 a2=0 a3=1 items=0 ppid=1817 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.262000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Feb 9 18:33:26.265000 audit[1882]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.265000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffcdb817d0 a2=0 a3=1 items=0 ppid=1817 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.265000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Feb 9 18:33:26.266000 audit[1884]: NETFILTER_CFG table=nat:20 family=2 entries=2 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.266000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=356 a0=3 a1=fffff3b86b30 a2=0 a3=1 items=0 ppid=1817 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.266000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Feb 9 18:33:26.268000 audit[1886]: NETFILTER_CFG table=nat:21 family=2 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.268000 audit[1886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=fffff6b94a40 a2=0 a3=1 items=0 ppid=1817 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.268000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Feb 9 18:33:26.270000 audit[1888]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.270000 audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffd7f83d40 a2=0 a3=1 items=0 ppid=1817 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.270000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Feb 9 18:33:26.271000 audit[1890]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.271000 audit[1890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff30f2a40 a2=0 a3=1 items=0 ppid=1817 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.271000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Feb 9 18:33:26.273000 audit[1892]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_register_rule pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.273000 audit[1892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffcf4bfa10 a2=0 a3=1 items=0 ppid=1817 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.273000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Feb 9 18:33:26.275000 audit[1894]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.275000 
audit[1894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffda3c2dd0 a2=0 a3=1 items=0 ppid=1817 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.275000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Feb 9 18:33:26.277000 audit[1896]: NETFILTER_CFG table=filter:26 family=2 entries=1 op=nft_register_rule pid=1896 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.277000 audit[1896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffcfa536a0 a2=0 a3=1 items=0 ppid=1817 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.277000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Feb 9 18:33:26.278641 systemd-networkd[1604]: docker0: Link UP Feb 9 18:33:26.310000 audit[1900]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_unregister_rule pid=1900 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.310000 audit[1900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd14ee060 a2=0 a3=1 items=0 ppid=1817 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.310000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:33:26.311000 audit[1901]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_rule pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:26.311000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffde3e23a0 a2=0 a3=1 items=0 ppid=1817 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:26.311000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Feb 9 18:33:26.312860 env[1817]: time="2024-02-09T18:33:26.312837173Z" level=info msg="Loading containers: done." Feb 9 18:33:26.322512 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2884896249-merged.mount: Deactivated successfully. Feb 9 18:33:26.360652 env[1817]: time="2024-02-09T18:33:26.360611921Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 18:33:26.361024 env[1817]: time="2024-02-09T18:33:26.361006042Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 18:33:26.361199 env[1817]: time="2024-02-09T18:33:26.361183962Z" level=info msg="Daemon has completed initialization" Feb 9 18:33:26.416806 systemd[1]: Started docker.service. 
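The audit PROCTITLE fields emitted while Docker programs its iptables chains are hex strings in which the argv elements are separated by NUL bytes. A small sketch (Python 3; the proctitle value is copied from the first NETFILTER_CFG record of the Docker startup above) that recovers the readable command line:

# An audit PROCTITLE record hex-encodes the process argv with NUL bytes
# between arguments; decode it back into a printable command line.
proctitle = "2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552"

args = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode() for arg in args))
# -> /usr/sbin/iptables --wait -t nat -N DOCKER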
Feb 9 18:33:26.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:26.424475 env[1817]: time="2024-02-09T18:33:26.424419624Z" level=info msg="API listen on /run/docker.sock" Feb 9 18:33:26.440750 systemd[1]: Reloading. Feb 9 18:33:26.499108 /usr/lib/systemd/system-generators/torcx-generator[1951]: time="2024-02-09T18:33:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:33:26.499137 /usr/lib/systemd/system-generators/torcx-generator[1951]: time="2024-02-09T18:33:26Z" level=info msg="torcx already run" Feb 9 18:33:26.571033 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:33:26.571050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:33:26.586809 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:33:26.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:26.665171 systemd[1]: Started kubelet.service. Feb 9 18:33:26.733839 kubelet[2013]: E0209 18:33:26.733783 2013 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:33:26.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:33:26.735832 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:26.736021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:30.466397 env[1440]: time="2024-02-09T18:33:30.466349905Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 9 18:33:31.418037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1425902877.mount: Deactivated successfully. 
Feb 9 18:33:33.665339 env[1440]: time="2024-02-09T18:33:33.665284780Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.679837 env[1440]: time="2024-02-09T18:33:33.679793841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.690325 env[1440]: time="2024-02-09T18:33:33.690282896Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.699446 env[1440]: time="2024-02-09T18:33:33.699404909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:33.700231 env[1440]: time="2024-02-09T18:33:33.700201750Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 9 18:33:33.709491 env[1440]: time="2024-02-09T18:33:33.709456203Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 9 18:33:35.542996 env[1440]: time="2024-02-09T18:33:35.542933422Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.555337 env[1440]: time="2024-02-09T18:33:35.555282190Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.562331 env[1440]: time="2024-02-09T18:33:35.562288217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.570200 env[1440]: time="2024-02-09T18:33:35.570152728Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:35.570884 env[1440]: time="2024-02-09T18:33:35.570857450Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 9 18:33:35.580283 env[1440]: time="2024-02-09T18:33:35.580250007Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 9 18:33:36.835843 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 18:33:36.836044 systemd[1]: Stopped kubelet.service. Feb 9 18:33:36.849405 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 18:33:36.849528 kernel: audit: type=1130 audit(1707503616.835:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:33:36.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:36.837614 systemd[1]: Started kubelet.service. Feb 9 18:33:36.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:36.879652 kernel: audit: type=1131 audit(1707503616.835:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:36.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:36.899640 kernel: audit: type=1130 audit(1707503616.835:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:36.924206 kubelet[2042]: E0209 18:33:36.924155 2042 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:33:36.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:33:36.926649 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:36.926785 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:36.948970 kernel: audit: type=1131 audit(1707503616.926:185): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Feb 9 18:33:37.830572 env[1440]: time="2024-02-09T18:33:37.830521195Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.842758 env[1440]: time="2024-02-09T18:33:37.842717239Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.850698 env[1440]: time="2024-02-09T18:33:37.850652949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.856803 env[1440]: time="2024-02-09T18:33:37.856756011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:37.857604 env[1440]: time="2024-02-09T18:33:37.857574974Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 9 18:33:37.866258 env[1440]: time="2024-02-09T18:33:37.866223566Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 9 18:33:39.190200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3120123133.mount: Deactivated successfully. Feb 9 18:33:39.700242 env[1440]: time="2024-02-09T18:33:39.700192067Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:39.717397 env[1440]: time="2024-02-09T18:33:39.717349087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:39.727247 env[1440]: time="2024-02-09T18:33:39.727197681Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:39.734186 env[1440]: time="2024-02-09T18:33:39.734135986Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:39.734499 env[1440]: time="2024-02-09T18:33:39.734469867Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 9 18:33:39.743118 env[1440]: time="2024-02-09T18:33:39.743078177Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 18:33:40.484741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3615273743.mount: Deactivated successfully. 
Feb 9 18:33:40.587428 env[1440]: time="2024-02-09T18:33:40.587378016Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:40.643035 env[1440]: time="2024-02-09T18:33:40.642998884Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:40.654186 env[1440]: time="2024-02-09T18:33:40.654122922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:40.668742 env[1440]: time="2024-02-09T18:33:40.668698611Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:40.669532 env[1440]: time="2024-02-09T18:33:40.669502214Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 9 18:33:40.679030 env[1440]: time="2024-02-09T18:33:40.678995326Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 9 18:33:41.734066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035912276.mount: Deactivated successfully. Feb 9 18:33:45.556142 env[1440]: time="2024-02-09T18:33:45.556083206Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:45.582784 env[1440]: time="2024-02-09T18:33:45.582738685Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:45.603648 env[1440]: time="2024-02-09T18:33:45.603593386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:45.634180 env[1440]: time="2024-02-09T18:33:45.634128956Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:45.634872 env[1440]: time="2024-02-09T18:33:45.634844839Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 9 18:33:45.643931 env[1440]: time="2024-02-09T18:33:45.643887545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 9 18:33:46.724377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4040703622.mount: Deactivated successfully. Feb 9 18:33:47.085819 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 18:33:47.085999 systemd[1]: Stopped kubelet.service. Feb 9 18:33:47.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:47.087560 systemd[1]: Started kubelet.service. 
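kubelet.service is now on its second automatic restart, and the spacing is regular: the failures at 18:33:26.7 and 18:33:36.9 are each followed by a scheduled restart roughly ten seconds later (18:33:36.8, 18:33:47.0). That cadence is what a unit configured for automatic restarts with a ten-second delay would produce. The unit file itself is never quoted in this log, so the [Service] settings below are inferred from the timing only, not copied from the real kubelet.service:

    # Assumed settings, inferred from the ~10 s restart spacing seen above;
    # not taken from the actual kubelet.service unit.
    [Service]
    Restart=on-failure
    RestartSec=10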
Feb 9 18:33:47.085000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:47.131959 kernel: audit: type=1130 audit(1707503627.085:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:47.132059 kernel: audit: type=1131 audit(1707503627.085:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:47.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:47.154308 kernel: audit: type=1130 audit(1707503627.086:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:47.189554 kubelet[2069]: E0209 18:33:47.189504 2069 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 9 18:33:47.191000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:33:47.191539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 18:33:47.191679 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 18:33:47.214053 env[1440]: time="2024-02-09T18:33:47.214000783Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:47.215121 kernel: audit: type=1131 audit(1707503627.191:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 9 18:33:47.226079 env[1440]: time="2024-02-09T18:33:47.226026297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:47.237404 env[1440]: time="2024-02-09T18:33:47.237345289Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:47.243475 env[1440]: time="2024-02-09T18:33:47.243422666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:47.243970 env[1440]: time="2024-02-09T18:33:47.243927387Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 9 18:33:51.781787 systemd[1]: Stopped kubelet.service. 
Feb 9 18:33:51.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:51.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:51.823772 kernel: audit: type=1130 audit(1707503631.780:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:51.823877 kernel: audit: type=1131 audit(1707503631.780:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:51.833272 systemd[1]: Reloading. Feb 9 18:33:51.913035 /usr/lib/systemd/system-generators/torcx-generator[2159]: time="2024-02-09T18:33:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:33:51.914257 /usr/lib/systemd/system-generators/torcx-generator[2159]: time="2024-02-09T18:33:51Z" level=info msg="torcx already run" Feb 9 18:33:51.963460 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:33:51.963478 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:33:51.978640 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:33:52.069355 systemd[1]: Started kubelet.service. Feb 9 18:33:52.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:52.095984 kernel: audit: type=1130 audit(1707503632.069:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:33:52.136490 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:33:52.136850 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:33:52.137015 kubelet[2219]: I0209 18:33:52.136981 2219 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:33:52.138287 kubelet[2219]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
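This kubelet instance (pid 2219) keeps running, and the NETFILTER_CFG audit records further down capture it creating its bootstrap iptables and ip6tables chains (KUBE-MARK-DROP, KUBE-MARK-MASQ, KUBE-POSTROUTING, KUBE-FIREWALL and so on). Each of those records carries the full command line in its PROCTITLE field, hex-encoded because the argv elements are separated by NUL bytes. A minimal decoder in plain Python, using a proctitle value copied verbatim from one of the records below:

    # Decode an audit PROCTITLE hex string back into the original command line.
    # The value is taken verbatim from the 18:33:53.480 record further down.
    hexstr = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    argv = bytes.fromhex(hexstr).split(b"\x00")
    print(" ".join(a.decode() for a in argv))
    # prints: iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle

The same decoding applies to the kubelet's own PROCTITLE records (the long strings beginning 2F6F70742F62696E2F6B7562656C6574...), which spell out /opt/bin/kubelet with its --bootstrap-kubeconfig and --kubeconfig arguments before the audit record's proctitle length limit cuts them off.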
Feb 9 18:33:52.138368 kubelet[2219]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:33:53.454071 kubelet[2219]: I0209 18:33:53.454037 2219 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:33:53.454071 kubelet[2219]: I0209 18:33:53.454063 2219 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:33:53.454409 kubelet[2219]: I0209 18:33:53.454257 2219 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:33:53.460026 kubelet[2219]: I0209 18:33:53.459988 2219 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:33:53.461689 kubelet[2219]: W0209 18:33:53.461661 2219 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:33:53.461841 kubelet[2219]: E0209 18:33:53.461828 2219 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.462199 kubelet[2219]: I0209 18:33:53.462181 2219 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 18:33:53.462513 kubelet[2219]: I0209 18:33:53.462498 2219 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:33:53.462591 kubelet[2219]: I0209 18:33:53.462575 2219 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:33:53.462672 kubelet[2219]: I0209 18:33:53.462593 2219 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:33:53.462672 kubelet[2219]: I0209 18:33:53.462604 2219 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:33:53.462719 kubelet[2219]: I0209 18:33:53.462691 2219 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:33:53.465474 kubelet[2219]: I0209 
18:33:53.465447 2219 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:33:53.465474 kubelet[2219]: I0209 18:33:53.465473 2219 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:33:53.465592 kubelet[2219]: I0209 18:33:53.465497 2219 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:33:53.465592 kubelet[2219]: I0209 18:33:53.465507 2219 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:33:53.466256 kubelet[2219]: I0209 18:33:53.466242 2219 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:33:53.466565 kubelet[2219]: W0209 18:33:53.466550 2219 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 18:33:53.467015 kubelet[2219]: I0209 18:33:53.466999 2219 server.go:1186] "Started kubelet" Feb 9 18:33:53.467215 kubelet[2219]: W0209 18:33:53.467184 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e8e52debc2&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.467299 kubelet[2219]: E0209 18:33:53.467288 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e8e52debc2&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.467000 audit[2219]: AVC avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:33:53.475093 kubelet[2219]: W0209 18:33:53.470128 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.475093 kubelet[2219]: E0209 18:33:53.470161 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.475093 kubelet[2219]: I0209 18:33:53.470559 2219 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:33:53.475093 kubelet[2219]: I0209 18:33:53.471103 2219 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:33:53.475228 kubelet[2219]: E0209 18:33:53.472063 2219 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-e8e52debc2.17b24582057c7429", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-e8e52debc2", UID:"ci-3510.3.2-a-e8e52debc2", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e8e52debc2"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 33, 53, 466946601, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 33, 53, 466946601, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.20.40:6443/api/v1/namespaces/default/events": dial tcp 10.200.20.40:6443: connect: connection refused'(may retry after sleeping) Feb 9 18:33:53.475228 kubelet[2219]: I0209 18:33:53.473026 2219 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 18:33:53.475228 kubelet[2219]: I0209 18:33:53.473060 2219 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 18:33:53.475228 kubelet[2219]: I0209 18:33:53.473138 2219 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:33:53.483058 kubelet[2219]: I0209 18:33:53.483038 2219 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:33:53.484313 kubelet[2219]: I0209 18:33:53.484296 2219 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:33:53.489699 kubelet[2219]: W0209 18:33:53.489661 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.489842 kubelet[2219]: E0209 18:33:53.489830 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.490002 kubelet[2219]: E0209 18:33:53.489986 2219 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e8e52debc2?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.490180 kubelet[2219]: E0209 18:33:53.490168 2219 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:33:53.490283 kubelet[2219]: E0209 18:33:53.490272 2219 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:33:53.467000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:33:53.500963 kernel: audit: type=1400 audit(1707503633.467:193): avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:33:53.501084 kernel: audit: type=1401 audit(1707503633.467:193): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:33:53.467000 audit[2219]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000796a80 a1=400109eb28 a2=4000796a50 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.532582 kernel: audit: type=1300 audit(1707503633.467:193): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000796a80 a1=400109eb28 a2=4000796a50 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.532715 kernel: audit: type=1327 audit(1707503633.467:193): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:33:53.467000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:33:53.472000 audit[2219]: AVC avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:33:53.587429 kernel: audit: type=1400 audit(1707503633.472:194): avc: denied { mac_admin } for pid=2219 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:33:53.587592 kernel: audit: type=1401 audit(1707503633.472:194): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:33:53.472000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:33:53.472000 audit[2219]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a33240 a1=400109eb40 a2=4000796b10 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.629929 kernel: audit: type=1300 audit(1707503633.472:194): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a33240 a1=400109eb40 a2=4000796b10 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.630069 kernel: audit: type=1327 audit(1707503633.472:194): 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:33:53.472000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:33:53.480000 audit[2230]: NETFILTER_CFG table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.673973 kernel: audit: type=1325 audit(1707503633.480:195): table=mangle:29 family=2 entries=2 op=nft_register_chain pid=2230 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.674114 kernel: audit: type=1300 audit(1707503633.480:195): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffad9d9b0 a2=0 a3=1 items=0 ppid=2219 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.480000 audit[2230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffad9d9b0 a2=0 a3=1 items=0 ppid=2219 pid=2230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.675964 kubelet[2219]: I0209 18:33:53.675938 2219 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:53.676813 kubelet[2219]: E0209 18:33:53.676785 2219 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:53.677182 kubelet[2219]: I0209 18:33:53.677160 2219 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:33:53.677281 kubelet[2219]: I0209 18:33:53.677271 2219 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:33:53.677361 kubelet[2219]: I0209 18:33:53.677353 2219 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:33:53.691385 kubelet[2219]: E0209 18:33:53.691360 2219 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e8e52debc2?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 18:33:53.483000 audit[2231]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_chain pid=2231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.483000 audit[2231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffecd75350 a2=0 a3=1 items=0 ppid=2219 pid=2231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.483000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 18:33:53.485000 audit[2233]: NETFILTER_CFG table=filter:31 family=2 entries=2 op=nft_register_chain pid=2233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.485000 audit[2233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffee686300 a2=0 a3=1 items=0 ppid=2219 pid=2233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.485000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 18:33:53.486000 audit[2235]: NETFILTER_CFG table=filter:32 family=2 entries=2 op=nft_register_chain pid=2235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.486000 audit[2235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc2552500 a2=0 a3=1 items=0 ppid=2219 pid=2235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 9 18:33:53.710000 audit[2242]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.710000 audit[2242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffedc0fb60 a2=0 a3=1 items=0 ppid=2219 pid=2242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.710000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 9 18:33:53.711000 audit[2243]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2243 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.711000 audit[2243]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcb2a5970 a2=0 a3=1 items=0 ppid=2219 pid=2243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.711000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 18:33:53.714698 kubelet[2219]: I0209 18:33:53.714679 2219 policy_none.go:49] "None policy: Start" Feb 9 18:33:53.715509 kubelet[2219]: I0209 18:33:53.715495 2219 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:33:53.715603 kubelet[2219]: I0209 18:33:53.715594 2219 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:33:53.724943 kubelet[2219]: I0209 18:33:53.724925 2219 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:33:53.725000 audit[2219]: AVC avc: denied { mac_admin } for pid=2219 
comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:33:53.725000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:33:53.725000 audit[2219]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f68f90 a1=4000f59d70 a2=4000f68f60 a3=25 items=0 ppid=1 pid=2219 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.725000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:33:53.727144 kubelet[2219]: I0209 18:33:53.727130 2219 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 18:33:53.727349 kubelet[2219]: I0209 18:33:53.727338 2219 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:33:53.728130 kubelet[2219]: E0209 18:33:53.728116 2219 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-e8e52debc2\" not found" Feb 9 18:33:53.836000 audit[2247]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.836000 audit[2247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe3aecae0 a2=0 a3=1 items=0 ppid=2219 pid=2247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 18:33:53.876000 audit[2250]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.876000 audit[2250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffcb6751b0 a2=0 a3=1 items=0 ppid=2219 pid=2250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.876000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 18:33:53.877000 audit[2251]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=2251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.877000 audit[2251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcac39d50 a2=0 a3=1 items=0 ppid=2219 pid=2251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.877000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 18:33:53.879088 kubelet[2219]: I0209 18:33:53.879057 2219 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:53.879389 kubelet[2219]: E0209 18:33:53.879370 2219 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:53.878000 audit[2252]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_chain pid=2252 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.878000 audit[2252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6ca6a60 a2=0 a3=1 items=0 ppid=2219 pid=2252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 18:33:53.880000 audit[2254]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=2254 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.880000 audit[2254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe690e360 a2=0 a3=1 items=0 ppid=2219 pid=2254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.880000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 18:33:53.882000 audit[2256]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.882000 audit[2256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe4a22300 a2=0 a3=1 items=0 ppid=2219 pid=2256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 18:33:53.884000 audit[2258]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_rule pid=2258 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.884000 audit[2258]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffce0adca0 a2=0 a3=1 items=0 ppid=2219 pid=2258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 18:33:53.886000 audit[2260]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_rule 
pid=2260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.886000 audit[2260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc9707040 a2=0 a3=1 items=0 ppid=2219 pid=2260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.886000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 18:33:53.889000 audit[2262]: NETFILTER_CFG table=nat:43 family=2 entries=1 op=nft_register_rule pid=2262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.889000 audit[2262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=fffff61ba580 a2=0 a3=1 items=0 ppid=2219 pid=2262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.889000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 18:33:53.890468 kubelet[2219]: I0209 18:33:53.890452 2219 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:33:53.890000 audit[2263]: NETFILTER_CFG table=mangle:44 family=10 entries=2 op=nft_register_chain pid=2263 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.890000 audit[2263]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcd8a09b0 a2=0 a3=1 items=0 ppid=2219 pid=2263 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 9 18:33:53.890000 audit[2264]: NETFILTER_CFG table=mangle:45 family=2 entries=1 op=nft_register_chain pid=2264 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.890000 audit[2264]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc11e2ad0 a2=0 a3=1 items=0 ppid=2219 pid=2264 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.890000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 18:33:53.891000 audit[2265]: NETFILTER_CFG table=nat:46 family=10 entries=2 op=nft_register_chain pid=2265 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.891000 audit[2265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff96003d0 a2=0 a3=1 items=0 ppid=2219 pid=2265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.891000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 9 18:33:53.892000 audit[2266]: NETFILTER_CFG table=nat:47 family=2 entries=1 op=nft_register_chain pid=2266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.892000 audit[2266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf0e9420 a2=0 a3=1 items=0 ppid=2219 pid=2266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.892000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 18:33:53.893000 audit[2268]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_chain pid=2268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:33:53.893000 audit[2268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc344670 a2=0 a3=1 items=0 ppid=2219 pid=2268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 18:33:53.893000 audit[2269]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_rule pid=2269 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.893000 audit[2269]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd3ddf1b0 a2=0 a3=1 items=0 ppid=2219 pid=2269 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 9 18:33:53.894000 audit[2270]: NETFILTER_CFG table=filter:50 family=10 entries=2 op=nft_register_chain pid=2270 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.894000 audit[2270]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe87103c0 a2=0 a3=1 items=0 ppid=2219 pid=2270 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 9 18:33:53.896000 audit[2272]: NETFILTER_CFG table=filter:51 family=10 entries=1 op=nft_register_rule pid=2272 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.896000 audit[2272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe0766290 a2=0 a3=1 items=0 ppid=2219 pid=2272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.896000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 9 18:33:53.897000 audit[2273]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2273 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.897000 audit[2273]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe670a0f0 a2=0 a3=1 items=0 ppid=2219 pid=2273 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 9 18:33:53.898000 audit[2274]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_chain pid=2274 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.898000 audit[2274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffced4d250 a2=0 a3=1 items=0 ppid=2219 pid=2274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.898000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 9 18:33:53.900000 audit[2276]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=2276 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.900000 audit[2276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff95fb370 a2=0 a3=1 items=0 ppid=2219 pid=2276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 9 18:33:53.901000 audit[2278]: NETFILTER_CFG table=nat:55 family=10 entries=2 op=nft_register_chain pid=2278 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.901000 audit[2278]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff9bf2a30 a2=0 a3=1 items=0 ppid=2219 pid=2278 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.901000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 9 18:33:53.903000 audit[2280]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_rule pid=2280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.903000 audit[2280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc038cac0 a2=0 a3=1 items=0 ppid=2219 pid=2280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.903000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 9 18:33:53.905000 audit[2282]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_rule pid=2282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.905000 audit[2282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc6242810 a2=0 a3=1 items=0 ppid=2219 pid=2282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 9 18:33:53.939000 audit[2284]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_rule pid=2284 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.939000 audit[2284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=fffffaabd210 a2=0 a3=1 items=0 ppid=2219 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 9 18:33:53.940595 kubelet[2219]: I0209 18:33:53.940570 2219 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:33:53.940648 kubelet[2219]: I0209 18:33:53.940605 2219 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:33:53.940648 kubelet[2219]: I0209 18:33:53.940623 2219 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:33:53.940731 kubelet[2219]: E0209 18:33:53.940674 2219 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 18:33:53.941333 kubelet[2219]: W0209 18:33:53.941158 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.941333 kubelet[2219]: E0209 18:33:53.941206 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:53.941000 audit[2285]: NETFILTER_CFG table=mangle:59 family=10 entries=1 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.941000 audit[2285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc937a800 a2=0 a3=1 items=0 ppid=2219 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 9 18:33:53.942000 audit[2286]: NETFILTER_CFG table=nat:60 family=10 entries=1 op=nft_register_chain pid=2286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.942000 audit[2286]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd51c2350 a2=0 a3=1 items=0 ppid=2219 pid=2286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 9 18:33:53.942000 audit[2287]: NETFILTER_CFG table=filter:61 family=10 entries=1 op=nft_register_chain pid=2287 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:33:53.942000 audit[2287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd891bd70 a2=0 a3=1 items=0 ppid=2219 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:33:53.942000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 9 18:33:54.041748 kubelet[2219]: I0209 18:33:54.041658 2219 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:33:54.045288 kubelet[2219]: I0209 18:33:54.045261 2219 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:33:54.046447 kubelet[2219]: I0209 18:33:54.046416 2219 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:33:54.054990 kubelet[2219]: I0209 18:33:54.054969 2219 status_manager.go:698] "Failed to 
get status for pod" podUID=1015204b68d3a78c3a1c07a1735ea5b4 pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" err="Get \"https://10.200.20.40:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-e8e52debc2\": dial tcp 10.200.20.40:6443: connect: connection refused" Feb 9 18:33:54.055147 kubelet[2219]: I0209 18:33:54.055124 2219 status_manager.go:698] "Failed to get status for pod" podUID=646067eb88f5bb35d7fb674d818cc90c pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" err="Get \"https://10.200.20.40:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-e8e52debc2\": dial tcp 10.200.20.40:6443: connect: connection refused" Feb 9 18:33:54.055290 kubelet[2219]: I0209 18:33:54.055273 2219 status_manager.go:698] "Failed to get status for pod" podUID=ad72c78f8b25f7b1333db4350b8c5ec5 pod="kube-system/kube-scheduler-ci-3510.3.2-a-e8e52debc2" err="Get \"https://10.200.20.40:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-e8e52debc2\": dial tcp 10.200.20.40:6443: connect: connection refused" Feb 9 18:33:54.088546 kubelet[2219]: I0209 18:33:54.088514 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1015204b68d3a78c3a1c07a1735ea5b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" (UID: \"1015204b68d3a78c3a1c07a1735ea5b4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.088642 kubelet[2219]: I0209 18:33:54.088632 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.088709 kubelet[2219]: I0209 18:33:54.088700 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.088871 kubelet[2219]: I0209 18:33:54.088859 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad72c78f8b25f7b1333db4350b8c5ec5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e8e52debc2\" (UID: \"ad72c78f8b25f7b1333db4350b8c5ec5\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.088978 kubelet[2219]: I0209 18:33:54.088943 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1015204b68d3a78c3a1c07a1735ea5b4-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" (UID: \"1015204b68d3a78c3a1c07a1735ea5b4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.089083 kubelet[2219]: I0209 18:33:54.089073 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: 
\"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.089168 kubelet[2219]: I0209 18:33:54.089160 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.089266 kubelet[2219]: I0209 18:33:54.089256 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.089364 kubelet[2219]: I0209 18:33:54.089355 2219 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1015204b68d3a78c3a1c07a1735ea5b4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" (UID: \"1015204b68d3a78c3a1c07a1735ea5b4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.092911 kubelet[2219]: E0209 18:33:54.092884 2219 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e8e52debc2?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.281787 kubelet[2219]: I0209 18:33:54.281762 2219 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.282287 kubelet[2219]: E0209 18:33:54.282271 2219 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:54.352556 env[1440]: time="2024-02-09T18:33:54.352517531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e8e52debc2,Uid:646067eb88f5bb35d7fb674d818cc90c,Namespace:kube-system,Attempt:0,}" Feb 9 18:33:54.353201 env[1440]: time="2024-02-09T18:33:54.353075292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e8e52debc2,Uid:1015204b68d3a78c3a1c07a1735ea5b4,Namespace:kube-system,Attempt:0,}" Feb 9 18:33:54.355594 env[1440]: time="2024-02-09T18:33:54.355565058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e8e52debc2,Uid:ad72c78f8b25f7b1333db4350b8c5ec5,Namespace:kube-system,Attempt:0,}" Feb 9 18:33:54.567738 kubelet[2219]: W0209 18:33:54.567685 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e8e52debc2&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.567738 kubelet[2219]: E0209 18:33:54.567739 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-e8e52debc2&limit=500&resourceVersion=0": 
dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.746432 kubelet[2219]: W0209 18:33:54.745986 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.746432 kubelet[2219]: E0209 18:33:54.746027 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.845770 kubelet[2219]: W0209 18:33:54.845688 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.845770 kubelet[2219]: E0209 18:33:54.845743 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.888453 kubelet[2219]: W0209 18:33:54.888360 2219 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.888453 kubelet[2219]: E0209 18:33:54.888411 2219 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:54.893945 kubelet[2219]: E0209 18:33:54.893905 2219 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-e8e52debc2?timeout=10s": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:55.084470 kubelet[2219]: I0209 18:33:55.083989 2219 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:55.084738 kubelet[2219]: E0209 18:33:55.084720 2219 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:55.244507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907978984.mount: Deactivated successfully. 
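The ip6tables audit records above (pids 2285, 2286 and 2287, all with ppid=2219, the kubelet) each carry a PROCTITLE field: the hex-encoded, NUL-separated argument vector of the command that triggered the nft_register_chain operation, here the creation of a KUBE-KUBELET-CANARY chain in the mangle, nat and filter tables. A minimal Python sketch for turning such a proctitle value back into a readable command line, using the record for pid 2285 above as input:

def decode_proctitle(hex_title):
    # Audit PROCTITLE values are the raw argv of the process, NUL-separated and hex-encoded.
    raw = bytes.fromhex(hex_title)
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

# PROCTITLE from the ip6tables record for pid 2285 (mangle table) above.
title = ("6970367461626C6573002D770035002D5700313030303030002D4E00"
         "4B5542452D4B5542454C45542D43414E415259002D74006D616E676C65")
print(" ".join(decode_proctitle(title)))
# -> ip6tables -w 5 -W 100000 -N KUBE-KUBELET-CANARY -t mangle

The nat and filter records decode the same way, differing only in the final -t argument.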
Feb 9 18:33:55.308611 env[1440]: time="2024-02-09T18:33:55.308569694Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.314792 env[1440]: time="2024-02-09T18:33:55.314764428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.427455 env[1440]: time="2024-02-09T18:33:55.427414643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.431575 env[1440]: time="2024-02-09T18:33:55.431546293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.450346 env[1440]: time="2024-02-09T18:33:55.450313695Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.461377 env[1440]: time="2024-02-09T18:33:55.461331800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.470120 env[1440]: time="2024-02-09T18:33:55.470080220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.474663 env[1440]: time="2024-02-09T18:33:55.474620470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.479655 env[1440]: time="2024-02-09T18:33:55.479616002Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.487833 kubelet[2219]: E0209 18:33:55.487799 2219 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused Feb 9 18:33:55.489765 env[1440]: time="2024-02-09T18:33:55.489731545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.495327 env[1440]: time="2024-02-09T18:33:55.495293037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.508512 env[1440]: time="2024-02-09T18:33:55.507716385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:33:55.675729 env[1440]: 
time="2024-02-09T18:33:55.675079044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:33:55.675729 env[1440]: time="2024-02-09T18:33:55.675131524Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:33:55.675729 env[1440]: time="2024-02-09T18:33:55.675144764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:33:55.675910 env[1440]: time="2024-02-09T18:33:55.675759646Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bfb5b78887b0ae385b318fa7001b089ee924254fa726a2cd7f0645c090c301da pid=2295 runtime=io.containerd.runc.v2 Feb 9 18:33:55.693154 env[1440]: time="2024-02-09T18:33:55.690842480Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:33:55.693154 env[1440]: time="2024-02-09T18:33:55.690876440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:33:55.693154 env[1440]: time="2024-02-09T18:33:55.690886080Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:33:55.693154 env[1440]: time="2024-02-09T18:33:55.691004800Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ad2f844e1289d5210e4e73d31844fb057704909e0c3fc0b2b3ad252dae29c5d5 pid=2329 runtime=io.containerd.runc.v2 Feb 9 18:33:55.703895 env[1440]: time="2024-02-09T18:33:55.703807949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:33:55.703895 env[1440]: time="2024-02-09T18:33:55.703868029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:33:55.704128 env[1440]: time="2024-02-09T18:33:55.704078630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:33:55.704426 env[1440]: time="2024-02-09T18:33:55.704370911Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2adfa0960f2367beda7ac0680b31f8f5b4149531011bc40229a413a93f08cf79 pid=2318 runtime=io.containerd.runc.v2 Feb 9 18:33:55.760822 env[1440]: time="2024-02-09T18:33:55.760771758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-e8e52debc2,Uid:646067eb88f5bb35d7fb674d818cc90c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfb5b78887b0ae385b318fa7001b089ee924254fa726a2cd7f0645c090c301da\"" Feb 9 18:33:55.764371 env[1440]: time="2024-02-09T18:33:55.764336126Z" level=info msg="CreateContainer within sandbox \"bfb5b78887b0ae385b318fa7001b089ee924254fa726a2cd7f0645c090c301da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 18:33:55.769762 env[1440]: time="2024-02-09T18:33:55.768646816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-e8e52debc2,Uid:ad72c78f8b25f7b1333db4350b8c5ec5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad2f844e1289d5210e4e73d31844fb057704909e0c3fc0b2b3ad252dae29c5d5\"" Feb 9 18:33:55.772994 env[1440]: time="2024-02-09T18:33:55.772926746Z" level=info msg="CreateContainer within sandbox \"ad2f844e1289d5210e4e73d31844fb057704909e0c3fc0b2b3ad252dae29c5d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 18:33:55.779371 env[1440]: time="2024-02-09T18:33:55.779341600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-e8e52debc2,Uid:1015204b68d3a78c3a1c07a1735ea5b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"2adfa0960f2367beda7ac0680b31f8f5b4149531011bc40229a413a93f08cf79\"" Feb 9 18:33:55.782214 env[1440]: time="2024-02-09T18:33:55.782186967Z" level=info msg="CreateContainer within sandbox \"2adfa0960f2367beda7ac0680b31f8f5b4149531011bc40229a413a93f08cf79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 18:33:55.878353 env[1440]: time="2024-02-09T18:33:55.878303824Z" level=info msg="CreateContainer within sandbox \"bfb5b78887b0ae385b318fa7001b089ee924254fa726a2cd7f0645c090c301da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670\"" Feb 9 18:33:55.879195 env[1440]: time="2024-02-09T18:33:55.879170226Z" level=info msg="StartContainer for \"f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670\"" Feb 9 18:33:55.913899 env[1440]: time="2024-02-09T18:33:55.913855825Z" level=info msg="CreateContainer within sandbox \"2adfa0960f2367beda7ac0680b31f8f5b4149531011bc40229a413a93f08cf79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1301d467e9f10c280c9f5e63f21ab6be36b17069f8a82cf585cbc7fb63460fe6\"" Feb 9 18:33:55.914624 env[1440]: time="2024-02-09T18:33:55.914599867Z" level=info msg="StartContainer for \"1301d467e9f10c280c9f5e63f21ab6be36b17069f8a82cf585cbc7fb63460fe6\"" Feb 9 18:33:55.923433 env[1440]: time="2024-02-09T18:33:55.923394566Z" level=info msg="CreateContainer within sandbox \"ad2f844e1289d5210e4e73d31844fb057704909e0c3fc0b2b3ad252dae29c5d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836\"" Feb 9 18:33:55.924004 env[1440]: time="2024-02-09T18:33:55.923981288Z" level=info 
msg="StartContainer for \"0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836\"" Feb 9 18:33:55.981491 env[1440]: time="2024-02-09T18:33:55.980243415Z" level=info msg="StartContainer for \"f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670\" returns successfully" Feb 9 18:33:56.034970 env[1440]: time="2024-02-09T18:33:56.032742972Z" level=info msg="StartContainer for \"1301d467e9f10c280c9f5e63f21ab6be36b17069f8a82cf585cbc7fb63460fe6\" returns successfully" Feb 9 18:33:56.047851 env[1440]: time="2024-02-09T18:33:56.047799445Z" level=info msg="StartContainer for \"0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836\" returns successfully" Feb 9 18:33:56.686961 kubelet[2219]: I0209 18:33:56.686675 2219 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:58.514101 kubelet[2219]: E0209 18:33:58.514073 2219 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-e8e52debc2\" not found" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:58.599982 kubelet[2219]: I0209 18:33:58.599942 2219 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:59.003094 kubelet[2219]: E0209 18:33:59.003069 2219 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:59.003761 kubelet[2219]: E0209 18:33:59.003745 2219 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:33:59.473100 kubelet[2219]: I0209 18:33:59.473070 2219 apiserver.go:52] "Watching apiserver" Feb 9 18:33:59.485288 kubelet[2219]: I0209 18:33:59.485258 2219 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:33:59.519673 kubelet[2219]: I0209 18:33:59.519637 2219 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:34:01.184058 systemd[1]: Reloading. Feb 9 18:34:01.315662 /usr/lib/systemd/system-generators/torcx-generator[2545]: time="2024-02-09T18:34:01Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 18:34:01.317185 /usr/lib/systemd/system-generators/torcx-generator[2545]: time="2024-02-09T18:34:01Z" level=info msg="torcx already run" Feb 9 18:34:01.392998 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 18:34:01.393154 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 18:34:01.409099 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 18:34:01.518583 systemd[1]: Stopping kubelet.service... Feb 9 18:34:01.534616 systemd[1]: kubelet.service: Deactivated successfully. 
Feb 9 18:34:01.535266 systemd[1]: Stopped kubelet.service. Feb 9 18:34:01.541784 kernel: kauditd_printk_skb: 101 callbacks suppressed Feb 9 18:34:01.541871 kernel: audit: type=1131 audit(1707503641.534:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:01.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:01.544079 systemd[1]: Started kubelet.service. Feb 9 18:34:01.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:01.589223 kernel: audit: type=1130 audit(1707503641.543:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:01.615015 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:01.615015 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:01.615015 kubelet[2612]: I0209 18:34:01.614422 2612 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 18:34:01.621987 kubelet[2612]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 9 18:34:01.621987 kubelet[2612]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 18:34:01.629971 kubelet[2612]: I0209 18:34:01.629922 2612 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 9 18:34:01.629971 kubelet[2612]: I0209 18:34:01.629968 2612 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 18:34:01.630324 kubelet[2612]: I0209 18:34:01.630246 2612 server.go:836] "Client rotation is on, will bootstrap in background" Feb 9 18:34:01.631939 kubelet[2612]: I0209 18:34:01.631896 2612 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 18:34:01.633425 kubelet[2612]: I0209 18:34:01.633396 2612 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 18:34:01.635578 kubelet[2612]: W0209 18:34:01.635550 2612 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 18:34:01.637486 kubelet[2612]: I0209 18:34:01.637467 2612 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 18:34:01.637883 kubelet[2612]: I0209 18:34:01.637863 2612 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 18:34:01.637958 kubelet[2612]: I0209 18:34:01.637932 2612 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 18:34:01.638326 kubelet[2612]: I0209 18:34:01.638294 2612 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 18:34:01.638326 kubelet[2612]: I0209 18:34:01.638323 2612 container_manager_linux.go:308] "Creating device plugin manager" Feb 9 18:34:01.638405 kubelet[2612]: I0209 18:34:01.638355 2612 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:01.642644 kubelet[2612]: I0209 18:34:01.642617 2612 kubelet.go:398] "Attempting to sync node with API server" Feb 9 18:34:01.642644 kubelet[2612]: I0209 18:34:01.642643 2612 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 18:34:01.642745 kubelet[2612]: I0209 18:34:01.642670 2612 kubelet.go:297] "Adding apiserver pod source" Feb 9 18:34:01.642745 kubelet[2612]: I0209 18:34:01.642680 2612 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 18:34:01.647606 kubelet[2612]: I0209 18:34:01.647582 2612 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 18:34:01.647975 kubelet[2612]: I0209 18:34:01.647933 2612 server.go:1186] "Started kubelet" Feb 9 18:34:01.649164 kubelet[2612]: I0209 18:34:01.649138 2612 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 18:34:01.650087 kubelet[2612]: I0209 18:34:01.650053 2612 server.go:451] "Adding debug handlers to kubelet server" Feb 9 18:34:01.650000 audit[2612]: AVC avc: denied { mac_admin } for pid=2612 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:01.650000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:01.699877 kernel: audit: type=1400 audit(1707503641.650:231): avc: denied { mac_admin } for pid=2612 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:01.700038 kernel: audit: type=1401 audit(1707503641.650:231): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:01.700057 kubelet[2612]: I0209 18:34:01.691813 2612 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 9 18:34:01.700057 kubelet[2612]: I0209 18:34:01.691867 2612 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 9 18:34:01.700057 kubelet[2612]: I0209 18:34:01.691892 2612 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 18:34:01.700057 kubelet[2612]: I0209 18:34:01.694879 2612 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 9 18:34:01.700057 kubelet[2612]: I0209 18:34:01.695000 2612 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 18:34:01.650000 audit[2612]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cf3bc0 a1=4000d057e8 a2=4000cf3b90 a3=25 items=0 ppid=1 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:01.741978 kernel: audit: type=1300 audit(1707503641.650:231): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cf3bc0 a1=4000d057e8 a2=4000cf3b90 a3=25 items=0 ppid=1 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:01.742073 kubelet[2612]: E0209 18:34:01.715433 2612 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 18:34:01.742073 kubelet[2612]: E0209 18:34:01.715459 2612 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 18:34:01.650000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:01.771148 kernel: audit: type=1327 audit(1707503641.650:231): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:01.690000 audit[2612]: AVC avc: denied { mac_admin } for pid=2612 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:01.792288 kernel: audit: type=1400 audit(1707503641.690:232): avc: denied { mac_admin } for pid=2612 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:01.792377 kubelet[2612]: I0209 18:34:01.788129 2612 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 18:34:01.690000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:01.803506 kernel: audit: type=1401 audit(1707503641.690:232): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:01.690000 audit[2612]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000dada00 a1=4000dbe3a8 a2=4000e46870 a3=25 items=0 ppid=1 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:01.834427 kernel: audit: type=1300 audit(1707503641.690:232): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000dada00 a1=4000dbe3a8 a2=4000e46870 a3=25 items=0 ppid=1 pid=2612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:01.834542 kubelet[2612]: I0209 18:34:01.813781 2612 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 9 18:34:01.834542 kubelet[2612]: I0209 18:34:01.813802 2612 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 9 18:34:01.834542 kubelet[2612]: I0209 18:34:01.813819 2612 kubelet.go:2113] "Starting kubelet main sync loop" Feb 9 18:34:01.834542 kubelet[2612]: E0209 18:34:01.813861 2612 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 18:34:01.690000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:01.867314 kernel: audit: type=1327 audit(1707503641.690:232): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:01.867411 kubelet[2612]: I0209 18:34:01.839288 2612 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:01.885318 kubelet[2612]: I0209 18:34:01.884770 2612 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:01.885318 kubelet[2612]: I0209 18:34:01.884866 2612 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:01.920162 kubelet[2612]: E0209 18:34:01.920129 2612 kubelet.go:2137] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.955162 2612 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.955190 2612 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.955212 2612 state_mem.go:36] "Initialized new in-memory state store" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.955352 2612 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.955367 2612 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.955373 2612 policy_none.go:49] "None policy: Start" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.956069 2612 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.956095 2612 state_mem.go:35] "Initializing new in-memory state store" Feb 9 18:34:01.957000 kubelet[2612]: I0209 18:34:01.956254 2612 state_mem.go:75] "Updated machine memory state" Feb 9 18:34:01.957762 kubelet[2612]: I0209 18:34:01.957743 2612 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 18:34:01.959000 audit[2612]: AVC avc: denied { mac_admin } for pid=2612 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:01.959000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 9 18:34:01.959000 audit[2612]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000e47b90 a1=40016d52f0 a2=4000e47b60 a3=25 items=0 ppid=1 pid=2612 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:01.959000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 9 18:34:01.960941 kubelet[2612]: I0209 18:34:01.960919 2612 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 9 18:34:01.961439 kubelet[2612]: I0209 18:34:01.961418 2612 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 18:34:02.120778 kubelet[2612]: I0209 18:34:02.120741 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:02.121010 kubelet[2612]: I0209 18:34:02.120996 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:02.121181 kubelet[2612]: I0209 18:34:02.121167 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:02.203099 kubelet[2612]: I0209 18:34:02.203055 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1015204b68d3a78c3a1c07a1735ea5b4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" (UID: \"1015204b68d3a78c3a1c07a1735ea5b4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203099 kubelet[2612]: I0209 18:34:02.203100 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203268 kubelet[2612]: I0209 18:34:02.203122 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203268 kubelet[2612]: I0209 18:34:02.203142 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203268 kubelet[2612]: I0209 18:34:02.203168 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203268 kubelet[2612]: I0209 18:34:02.203190 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1015204b68d3a78c3a1c07a1735ea5b4-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" (UID: \"1015204b68d3a78c3a1c07a1735ea5b4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203268 kubelet[2612]: I0209 18:34:02.203208 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1015204b68d3a78c3a1c07a1735ea5b4-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" (UID: \"1015204b68d3a78c3a1c07a1735ea5b4\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203387 kubelet[2612]: I0209 18:34:02.203230 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/646067eb88f5bb35d7fb674d818cc90c-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" (UID: \"646067eb88f5bb35d7fb674d818cc90c\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.203387 kubelet[2612]: I0209 18:34:02.203251 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad72c78f8b25f7b1333db4350b8c5ec5-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-e8e52debc2\" (UID: \"ad72c78f8b25f7b1333db4350b8c5ec5\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:02.647514 kubelet[2612]: I0209 18:34:02.647475 2612 apiserver.go:52] "Watching apiserver" Feb 9 18:34:02.695538 kubelet[2612]: I0209 18:34:02.695482 2612 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 18:34:02.705753 kubelet[2612]: I0209 18:34:02.705689 2612 reconciler.go:41] "Reconciler: start to sync state" Feb 9 18:34:03.064262 kubelet[2612]: E0209 18:34:03.064160 2612 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-e8e52debc2\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:03.279428 kubelet[2612]: E0209 18:34:03.279392 2612 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-e8e52debc2\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:03.848823 kubelet[2612]: I0209 18:34:03.848786 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-e8e52debc2" podStartSLOduration=1.8487204959999999 pod.CreationTimestamp="2024-02-09 18:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:03.455615568 +0000 UTC m=+1.907212504" watchObservedRunningTime="2024-02-09 18:34:03.848720496 +0000 UTC m=+2.300317392" Feb 9 18:34:04.326713 kubelet[2612]: I0209 18:34:04.326612 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-e8e52debc2" podStartSLOduration=2.326557447 pod.CreationTimestamp="2024-02-09 18:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:03.849655898 +0000 UTC m=+2.301252834" watchObservedRunningTime="2024-02-09 18:34:04.326557447 +0000 UTC m=+2.778154383" Feb 9 18:34:05.310739 sudo[1801]: pam_unix(sudo:session): session closed for user 
root Feb 9 18:34:05.309000 audit[1801]: USER_END pid=1801 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:05.309000 audit[1801]: CRED_DISP pid=1801 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 9 18:34:05.392527 sshd[1797]: pam_unix(sshd:session): session closed for user core Feb 9 18:34:05.392000 audit[1797]: USER_END pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:34:05.393000 audit[1797]: CRED_DISP pid=1797 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:34:05.395703 systemd-logind[1421]: Session 9 logged out. Waiting for processes to exit. Feb 9 18:34:05.395900 systemd[1]: sshd@6-10.200.20.40:22-10.200.12.6:36740.service: Deactivated successfully. Feb 9 18:34:05.396702 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 18:34:05.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.200.20.40:22-10.200.12.6:36740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:34:05.397652 systemd-logind[1421]: Removed session 9. Feb 9 18:34:08.919343 kubelet[2612]: I0209 18:34:08.919293 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-e8e52debc2" podStartSLOduration=6.919215712 pod.CreationTimestamp="2024-02-09 18:34:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:04.329504813 +0000 UTC m=+2.781101749" watchObservedRunningTime="2024-02-09 18:34:08.919215712 +0000 UTC m=+7.370812648" Feb 9 18:34:15.572926 kubelet[2612]: I0209 18:34:15.572902 2612 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 18:34:15.573882 env[1440]: time="2024-02-09T18:34:15.573799763Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
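The pod_startup_latency_tracker entries above report podStartSLOduration alongside the pod's CreationTimestamp, observedRunningTime and watchObservedRunningTime; in these particular records the reported duration lines up with watchObservedRunningTime minus CreationTimestamp (for kube-apiserver, 18:34:04.326557447 minus 18:34:02 gives 2.326557447 s). A short Python check of that arithmetic against the kube-apiserver record, offered as an observation about these log lines rather than a statement of kubelet internals:

from datetime import datetime, timezone

def parse(ts):
    # Timestamps in the tracker records look like "2024-02-09 18:34:04.326557447 +0000 UTC";
    # trim the nanosecond fraction to microseconds so datetime.strptime can handle it.
    date, clock = ts.split(" ")[:2]
    if "." in clock:
        hms, frac = clock.split(".")
        clock = f"{hms}.{frac[:6]}"
        fmt = "%Y-%m-%d %H:%M:%S.%f"
    else:
        fmt = "%Y-%m-%d %H:%M:%S"
    return datetime.strptime(f"{date} {clock}", fmt).replace(tzinfo=timezone.utc)

creation = parse("2024-02-09 18:34:02 +0000 UTC")
watched  = parse("2024-02-09 18:34:04.326557447 +0000 UTC")
print((watched - creation).total_seconds())
# -> 2.326557, matching podStartSLOduration=2.326557447 to the microsecond precision kept above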
Feb 9 18:34:15.574315 kubelet[2612]: I0209 18:34:15.574297 2612 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 18:34:15.626859 kubelet[2612]: I0209 18:34:15.626811 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:15.652352 kubelet[2612]: I0209 18:34:15.652308 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:15.674838 kubelet[2612]: I0209 18:34:15.674810 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0af192a2-b1f2-44fb-b558-fc29277bafd4-lib-modules\") pod \"kube-proxy-5rq5k\" (UID: \"0af192a2-b1f2-44fb-b558-fc29277bafd4\") " pod="kube-system/kube-proxy-5rq5k" Feb 9 18:34:15.675058 kubelet[2612]: I0209 18:34:15.675046 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32b2a077-f9f7-435f-8b37-ecdfc1f6e4af-var-lib-calico\") pod \"tigera-operator-cfc98749c-9mw7d\" (UID: \"32b2a077-f9f7-435f-8b37-ecdfc1f6e4af\") " pod="tigera-operator/tigera-operator-cfc98749c-9mw7d" Feb 9 18:34:15.675164 kubelet[2612]: I0209 18:34:15.675154 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0af192a2-b1f2-44fb-b558-fc29277bafd4-kube-proxy\") pod \"kube-proxy-5rq5k\" (UID: \"0af192a2-b1f2-44fb-b558-fc29277bafd4\") " pod="kube-system/kube-proxy-5rq5k" Feb 9 18:34:15.675262 kubelet[2612]: I0209 18:34:15.675252 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0af192a2-b1f2-44fb-b558-fc29277bafd4-xtables-lock\") pod \"kube-proxy-5rq5k\" (UID: \"0af192a2-b1f2-44fb-b558-fc29277bafd4\") " pod="kube-system/kube-proxy-5rq5k" Feb 9 18:34:15.675356 kubelet[2612]: I0209 18:34:15.675346 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7n8z\" (UniqueName: \"kubernetes.io/projected/0af192a2-b1f2-44fb-b558-fc29277bafd4-kube-api-access-n7n8z\") pod \"kube-proxy-5rq5k\" (UID: \"0af192a2-b1f2-44fb-b558-fc29277bafd4\") " pod="kube-system/kube-proxy-5rq5k" Feb 9 18:34:15.675463 kubelet[2612]: I0209 18:34:15.675453 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msx47\" (UniqueName: \"kubernetes.io/projected/32b2a077-f9f7-435f-8b37-ecdfc1f6e4af-kube-api-access-msx47\") pod \"tigera-operator-cfc98749c-9mw7d\" (UID: \"32b2a077-f9f7-435f-8b37-ecdfc1f6e4af\") " pod="tigera-operator/tigera-operator-cfc98749c-9mw7d" Feb 9 18:34:16.229764 env[1440]: time="2024-02-09T18:34:16.229712276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5rq5k,Uid:0af192a2-b1f2-44fb-b558-fc29277bafd4,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:16.260940 env[1440]: time="2024-02-09T18:34:16.260712759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-9mw7d,Uid:32b2a077-f9f7-435f-8b37-ecdfc1f6e4af,Namespace:tigera-operator,Attempt:0,}" Feb 9 18:34:16.307016 env[1440]: time="2024-02-09T18:34:16.306932382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:16.307186 env[1440]: time="2024-02-09T18:34:16.307164583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:16.307264 env[1440]: time="2024-02-09T18:34:16.307245303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:16.307535 env[1440]: time="2024-02-09T18:34:16.307505823Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1182aeeb38fb0683f841ffc23818383346fe8cb78dd37c4525d36decef696d26 pid=2719 runtime=io.containerd.runc.v2 Feb 9 18:34:16.331376 env[1440]: time="2024-02-09T18:34:16.331303416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:16.331517 env[1440]: time="2024-02-09T18:34:16.331382256Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:16.331517 env[1440]: time="2024-02-09T18:34:16.331407536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:16.331826 env[1440]: time="2024-02-09T18:34:16.331760217Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92d7a1de7bd26d7a4e5873f380f49ac043eb31cae8bdced838d3bc79b7ea9356 pid=2747 runtime=io.containerd.runc.v2 Feb 9 18:34:16.357185 env[1440]: time="2024-02-09T18:34:16.357135651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5rq5k,Uid:0af192a2-b1f2-44fb-b558-fc29277bafd4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1182aeeb38fb0683f841ffc23818383346fe8cb78dd37c4525d36decef696d26\"" Feb 9 18:34:16.362363 env[1440]: time="2024-02-09T18:34:16.362321859Z" level=info msg="CreateContainer within sandbox \"1182aeeb38fb0683f841ffc23818383346fe8cb78dd37c4525d36decef696d26\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 18:34:16.383188 env[1440]: time="2024-02-09T18:34:16.383138927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-9mw7d,Uid:32b2a077-f9f7-435f-8b37-ecdfc1f6e4af,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"92d7a1de7bd26d7a4e5873f380f49ac043eb31cae8bdced838d3bc79b7ea9356\"" Feb 9 18:34:16.386654 env[1440]: time="2024-02-09T18:34:16.386164851Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 9 18:34:16.429580 env[1440]: time="2024-02-09T18:34:16.429525991Z" level=info msg="CreateContainer within sandbox \"1182aeeb38fb0683f841ffc23818383346fe8cb78dd37c4525d36decef696d26\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5f6903b597e80df18db8bbd37dc0e259ca3b900d765ebbd13aeee2d05f7a8e33\"" Feb 9 18:34:16.430552 env[1440]: time="2024-02-09T18:34:16.430524632Z" level=info msg="StartContainer for \"5f6903b597e80df18db8bbd37dc0e259ca3b900d765ebbd13aeee2d05f7a8e33\"" Feb 9 18:34:16.490376 env[1440]: time="2024-02-09T18:34:16.488561712Z" level=info msg="StartContainer for \"5f6903b597e80df18db8bbd37dc0e259ca3b900d765ebbd13aeee2d05f7a8e33\" returns successfully" Feb 9 18:34:16.573000 audit[2852]: NETFILTER_CFG table=mangle:62 family=2 entries=1 op=nft_register_chain pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.579579 
kernel: kauditd_printk_skb: 9 callbacks suppressed Feb 9 18:34:16.579663 kernel: audit: type=1325 audit(1707503656.573:239): table=mangle:62 family=2 entries=1 op=nft_register_chain pid=2852 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.573000 audit[2852]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca5d1940 a2=0 a3=ffffa4b676c0 items=0 ppid=2815 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.573000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:34:16.644076 kernel: audit: type=1300 audit(1707503656.573:239): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca5d1940 a2=0 a3=ffffa4b676c0 items=0 ppid=2815 pid=2852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.644177 kernel: audit: type=1327 audit(1707503656.573:239): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:34:16.578000 audit[2853]: NETFILTER_CFG table=nat:63 family=2 entries=1 op=nft_register_chain pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.659934 kernel: audit: type=1325 audit(1707503656.578:240): table=nat:63 family=2 entries=1 op=nft_register_chain pid=2853 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.660088 kernel: audit: type=1300 audit(1707503656.578:240): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4b77d20 a2=0 a3=ffffa076e6c0 items=0 ppid=2815 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.578000 audit[2853]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4b77d20 a2=0 a3=ffffa076e6c0 items=0 ppid=2815 pid=2853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:34:16.712458 kernel: audit: type=1327 audit(1707503656.578:240): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:34:16.712566 kernel: audit: type=1325 audit(1707503656.578:241): table=filter:64 family=2 entries=1 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.578000 audit[2854]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_chain pid=2854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.578000 audit[2854]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe547ea80 a2=0 a3=ffff91be86c0 items=0 ppid=2815 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.752361 kernel: audit: type=1300 audit(1707503656.578:241): arch=c00000b7 
syscall=211 success=yes exit=104 a0=3 a1=ffffe547ea80 a2=0 a3=ffff91be86c0 items=0 ppid=2815 pid=2854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:34:16.767903 kernel: audit: type=1327 audit(1707503656.578:241): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:34:16.578000 audit[2855]: NETFILTER_CFG table=mangle:65 family=10 entries=1 op=nft_register_chain pid=2855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.783743 kernel: audit: type=1325 audit(1707503656.578:242): table=mangle:65 family=10 entries=1 op=nft_register_chain pid=2855 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.578000 audit[2855]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4c47510 a2=0 a3=ffff9cd8f6c0 items=0 ppid=2815 pid=2855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 9 18:34:16.578000 audit[2856]: NETFILTER_CFG table=nat:66 family=10 entries=1 op=nft_register_chain pid=2856 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.578000 audit[2856]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf23fcc0 a2=0 a3=ffff804d36c0 items=0 ppid=2815 pid=2856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 9 18:34:16.583000 audit[2857]: NETFILTER_CFG table=filter:67 family=10 entries=1 op=nft_register_chain pid=2857 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.583000 audit[2857]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8936040 a2=0 a3=ffffadf0b6c0 items=0 ppid=2815 pid=2857 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 9 18:34:16.706000 audit[2858]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=2858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.706000 audit[2858]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffeb3e1f90 a2=0 a3=ffffa4f216c0 items=0 ppid=2815 pid=2858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.706000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 18:34:16.708000 audit[2860]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=2860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.708000 audit[2860]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe0df1ee0 a2=0 a3=ffffb41646c0 items=0 ppid=2815 pid=2860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.708000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 9 18:34:16.712000 audit[2863]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_rule pid=2863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.712000 audit[2863]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffedad3550 a2=0 a3=ffff844e56c0 items=0 ppid=2815 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 9 18:34:16.714000 audit[2864]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_chain pid=2864 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.714000 audit[2864]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6f139e0 a2=0 a3=ffffbe57e6c0 items=0 ppid=2815 pid=2864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 18:34:16.716000 audit[2866]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=2866 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.716000 audit[2866]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea9961b0 a2=0 a3=ffffbe24e6c0 items=0 ppid=2815 pid=2866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 18:34:16.718000 audit[2867]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=2867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.718000 audit[2867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0c244c0 a2=0 a3=ffff8a7206c0 
items=0 ppid=2815 pid=2867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 18:34:16.753000 audit[2869]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=2869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.753000 audit[2869]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcd55fdf0 a2=0 a3=ffffa8f586c0 items=0 ppid=2815 pid=2869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 18:34:16.785000 audit[2872]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_rule pid=2872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.785000 audit[2872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd22e82c0 a2=0 a3=ffffbe4936c0 items=0 ppid=2815 pid=2872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.785000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 9 18:34:16.786000 audit[2873]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_chain pid=2873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.786000 audit[2873]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd51a4de0 a2=0 a3=ffffaadbb6c0 items=0 ppid=2815 pid=2873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 18:34:16.789000 audit[2875]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=2875 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.789000 audit[2875]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe56023e0 a2=0 a3=ffff90b036c0 items=0 ppid=2815 pid=2875 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.789000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 18:34:16.790000 audit[2876]: NETFILTER_CFG table=filter:78 family=2 
entries=1 op=nft_register_chain pid=2876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.790000 audit[2876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0e42df0 a2=0 a3=ffff8ca506c0 items=0 ppid=2815 pid=2876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.790000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 18:34:16.792000 audit[2878]: NETFILTER_CFG table=filter:79 family=2 entries=1 op=nft_register_rule pid=2878 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.792000 audit[2878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff51b92b0 a2=0 a3=ffffb111a6c0 items=0 ppid=2815 pid=2878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:34:16.796000 audit[2881]: NETFILTER_CFG table=filter:80 family=2 entries=1 op=nft_register_rule pid=2881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.796000 audit[2881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffd416930 a2=0 a3=ffffa1e866c0 items=0 ppid=2815 pid=2881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:34:16.799000 audit[2884]: NETFILTER_CFG table=filter:81 family=2 entries=1 op=nft_register_rule pid=2884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.799000 audit[2884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe5b507e0 a2=0 a3=ffffa06db6c0 items=0 ppid=2815 pid=2884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.799000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 18:34:16.800000 audit[2885]: NETFILTER_CFG table=nat:82 family=2 entries=1 op=nft_register_chain pid=2885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.800000 audit[2885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc0030da0 a2=0 a3=ffffb9fa56c0 items=0 ppid=2815 pid=2885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 18:34:16.802000 audit[2887]: NETFILTER_CFG table=nat:83 family=2 entries=1 op=nft_register_rule pid=2887 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.802000 audit[2887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdadf1070 a2=0 a3=ffffa49666c0 items=0 ppid=2815 pid=2887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:34:16.804000 audit[2890]: NETFILTER_CFG table=nat:84 family=2 entries=1 op=nft_register_rule pid=2890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 9 18:34:16.804000 audit[2890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd1553c50 a2=0 a3=ffff938b86c0 items=0 ppid=2815 pid=2890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.804000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:34:16.830000 audit[2894]: NETFILTER_CFG table=filter:85 family=2 entries=6 op=nft_register_rule pid=2894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:16.830000 audit[2894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd0cc8480 a2=0 a3=ffffaccb46c0 items=0 ppid=2815 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:16.865000 audit[2894]: NETFILTER_CFG table=nat:86 family=2 entries=17 op=nft_register_chain pid=2894 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:16.865000 audit[2894]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd0cc8480 a2=0 a3=ffffaccb46c0 items=0 ppid=2815 pid=2894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:16.866000 audit[2898]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=2898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.866000 audit[2898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffebd3a750 a2=0 a3=ffff92b986c0 items=0 ppid=2815 pid=2898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 9 18:34:16.869000 audit[2900]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=2900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.869000 audit[2900]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd2362370 a2=0 a3=ffff82ce06c0 items=0 ppid=2815 pid=2900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 9 18:34:16.872000 audit[2903]: NETFILTER_CFG table=filter:89 family=10 entries=2 op=nft_register_chain pid=2903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.872000 audit[2903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc1dc05a0 a2=0 a3=ffffa00326c0 items=0 ppid=2815 pid=2903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 9 18:34:16.873000 audit[2904]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_chain pid=2904 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.873000 audit[2904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4731e70 a2=0 a3=ffffaa9976c0 items=0 ppid=2815 pid=2904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.873000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 9 18:34:16.875000 audit[2906]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_rule pid=2906 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.875000 audit[2906]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc76e0bf0 a2=0 a3=ffffb30ea6c0 items=0 ppid=2815 pid=2906 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.875000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 9 18:34:16.876000 audit[2907]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=2907 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.876000 audit[2907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca4314f0 a2=0 a3=ffff8a0f56c0 items=0 ppid=2815 pid=2907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 9 18:34:16.878000 audit[2909]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=2909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.878000 audit[2909]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff00f5b70 a2=0 a3=ffffb613b6c0 items=0 ppid=2815 pid=2909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.878000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 9 18:34:16.881000 audit[2912]: NETFILTER_CFG table=filter:94 family=10 entries=2 op=nft_register_chain pid=2912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.881000 audit[2912]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd2f16310 a2=0 a3=ffff87fea6c0 items=0 ppid=2815 pid=2912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.881000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 9 18:34:16.882000 audit[2913]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_chain pid=2913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.882000 audit[2913]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc64b9d00 a2=0 a3=ffffbc59f6c0 items=0 ppid=2815 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 9 18:34:16.884000 audit[2915]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=2915 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.884000 audit[2915]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6fc9900 a2=0 a3=ffffa803e6c0 items=0 ppid=2815 pid=2915 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.884000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 9 18:34:16.885000 audit[2916]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_chain pid=2916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.885000 audit[2916]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd66e1180 a2=0 a3=ffffb70086c0 items=0 ppid=2815 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 9 18:34:16.887000 audit[2918]: NETFILTER_CFG table=filter:98 family=10 entries=1 op=nft_register_rule pid=2918 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.887000 audit[2918]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcc2b3140 a2=0 a3=ffffb73256c0 items=0 ppid=2815 pid=2918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.887000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 9 18:34:16.890000 audit[2921]: NETFILTER_CFG table=filter:99 family=10 entries=1 op=nft_register_rule pid=2921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.890000 audit[2921]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffee5e7740 a2=0 a3=ffffabb9a6c0 items=0 ppid=2815 pid=2921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 9 18:34:16.893000 audit[2924]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_rule pid=2924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.893000 audit[2924]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff86196e0 a2=0 a3=ffffad9c16c0 items=0 ppid=2815 pid=2924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.893000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 9 18:34:16.894000 audit[2925]: NETFILTER_CFG table=nat:101 family=10 entries=1 op=nft_register_chain pid=2925 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Feb 9 18:34:16.894000 audit[2925]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff3c61130 a2=0 a3=ffff88cfb6c0 items=0 ppid=2815 pid=2925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.894000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 9 18:34:16.896000 audit[2927]: NETFILTER_CFG table=nat:102 family=10 entries=2 op=nft_register_chain pid=2927 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.896000 audit[2927]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd5f66a40 a2=0 a3=ffff9ad0b6c0 items=0 ppid=2815 pid=2927 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.896000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:34:16.899000 audit[2930]: NETFILTER_CFG table=nat:103 family=10 entries=2 op=nft_register_chain pid=2930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 9 18:34:16.899000 audit[2930]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd881efd0 a2=0 a3=ffffb9d4a6c0 items=0 ppid=2815 pid=2930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 9 18:34:16.903000 audit[2934]: NETFILTER_CFG table=filter:104 family=10 entries=3 op=nft_register_rule pid=2934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 18:34:16.903000 audit[2934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffb9a8ba0 a2=0 a3=ffffad0b46c0 items=0 ppid=2815 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.903000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:16.904000 audit[2934]: NETFILTER_CFG table=nat:105 family=10 entries=10 op=nft_register_chain pid=2934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 9 18:34:16.904000 audit[2934]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=fffffb9a8ba0 a2=0 a3=ffffad0b46c0 items=0 ppid=2815 pid=2934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:16.904000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:16.923555 kubelet[2612]: I0209 18:34:16.923243 2612 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5rq5k" podStartSLOduration=1.923209828 pod.CreationTimestamp="2024-02-09 18:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:16.923015268 +0000 UTC m=+15.374612204" watchObservedRunningTime="2024-02-09 18:34:16.923209828 +0000 UTC m=+15.374806764" Feb 9 18:34:16.978295 systemd[1]: run-containerd-runc-k8s.io-1182aeeb38fb0683f841ffc23818383346fe8cb78dd37c4525d36decef696d26-runc.3RSJLK.mount: Deactivated successfully. Feb 9 18:34:20.057262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697542334.mount: Deactivated successfully. Feb 9 18:34:20.812068 env[1440]: time="2024-02-09T18:34:20.812018383Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.833079 env[1440]: time="2024-02-09T18:34:20.833031929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.845518 env[1440]: time="2024-02-09T18:34:20.845463825Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.852844 env[1440]: time="2024-02-09T18:34:20.852789914Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:20.853553 env[1440]: time="2024-02-09T18:34:20.853524795Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 9 18:34:20.856167 env[1440]: time="2024-02-09T18:34:20.856110118Z" level=info msg="CreateContainer within sandbox \"92d7a1de7bd26d7a4e5873f380f49ac043eb31cae8bdced838d3bc79b7ea9356\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 9 18:34:20.917286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4121005457.mount: Deactivated successfully. Feb 9 18:34:20.956898 env[1440]: time="2024-02-09T18:34:20.956845845Z" level=info msg="CreateContainer within sandbox \"92d7a1de7bd26d7a4e5873f380f49ac043eb31cae8bdced838d3bc79b7ea9356\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2\"" Feb 9 18:34:20.959042 env[1440]: time="2024-02-09T18:34:20.957737487Z" level=info msg="StartContainer for \"94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2\"" Feb 9 18:34:21.007365 env[1440]: time="2024-02-09T18:34:21.006719108Z" level=info msg="StartContainer for \"94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2\" returns successfully" Feb 9 18:34:21.020170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount447250430.mount: Deactivated successfully. 
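(Editor's note, not part of the captured log: the audit PROCTITLE fields above store each command line as hex-encoded, NUL-separated argv. A minimal sketch, assuming standard Python 3, for recovering the iptables/ip6tables invocations kube-proxy issued, such as the KUBE-PROXY-CANARY and KUBE-SERVICES chain setup recorded in the preceding entries:)

    # Hypothetical helper: decode an audit proctitle hex string back into a command line.
    def decode_proctitle(hex_value: str) -> str:
        # argv elements are separated by NUL bytes in the audit record
        return " ".join(bytes.fromhex(hex_value).decode().split("\x00"))

    # Example using a proctitle value taken verbatim from this log:
    print(decode_proctitle(
        "69707461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65"
    ))
    # -> iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle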
Feb 9 18:34:21.932081 kubelet[2612]: I0209 18:34:21.932042 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-9mw7d" podStartSLOduration=-9.223372029922768e+09 pod.CreationTimestamp="2024-02-09 18:34:15 +0000 UTC" firstStartedPulling="2024-02-09 18:34:16.384443289 +0000 UTC m=+14.836040225" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:21.93187689 +0000 UTC m=+20.383473826" watchObservedRunningTime="2024-02-09 18:34:21.932007531 +0000 UTC m=+20.383604467" Feb 9 18:34:23.346000 audit[2999]: NETFILTER_CFG table=filter:106 family=2 entries=13 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.355194 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 9 18:34:23.355255 kernel: audit: type=1325 audit(1707503663.346:283): table=filter:106 family=2 entries=13 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.346000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffffc596120 a2=0 a3=ffff905dd6c0 items=0 ppid=2815 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.425545 kernel: audit: type=1300 audit(1707503663.346:283): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffffc596120 a2=0 a3=ffff905dd6c0 items=0 ppid=2815 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.449348 kernel: audit: type=1327 audit(1707503663.346:283): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.372000 audit[2999]: NETFILTER_CFG table=nat:107 family=2 entries=20 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.474045 kernel: audit: type=1325 audit(1707503663.372:284): table=nat:107 family=2 entries=20 op=nft_register_rule pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.372000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffffc596120 a2=0 a3=ffff905dd6c0 items=0 ppid=2815 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.492594 kubelet[2612]: I0209 18:34:23.492558 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:23.518353 kernel: audit: type=1300 audit(1707503663.372:284): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffffc596120 a2=0 a3=ffff905dd6c0 items=0 ppid=2815 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.372000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.550405 kernel: audit: type=1327 
audit(1707503663.372:284): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.628605 kubelet[2612]: I0209 18:34:23.628516 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vkfb\" (UniqueName: \"kubernetes.io/projected/25088b05-5aca-48bf-86a2-86ac430ea4dd-kube-api-access-8vkfb\") pod \"calico-typha-645fdf4995-m2797\" (UID: \"25088b05-5aca-48bf-86a2-86ac430ea4dd\") " pod="calico-system/calico-typha-645fdf4995-m2797" Feb 9 18:34:23.628772 kubelet[2612]: I0209 18:34:23.628758 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25088b05-5aca-48bf-86a2-86ac430ea4dd-tigera-ca-bundle\") pod \"calico-typha-645fdf4995-m2797\" (UID: \"25088b05-5aca-48bf-86a2-86ac430ea4dd\") " pod="calico-system/calico-typha-645fdf4995-m2797" Feb 9 18:34:23.628854 kubelet[2612]: I0209 18:34:23.628843 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/25088b05-5aca-48bf-86a2-86ac430ea4dd-typha-certs\") pod \"calico-typha-645fdf4995-m2797\" (UID: \"25088b05-5aca-48bf-86a2-86ac430ea4dd\") " pod="calico-system/calico-typha-645fdf4995-m2797" Feb 9 18:34:23.631337 kubelet[2612]: I0209 18:34:23.631312 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:23.646000 audit[3025]: NETFILTER_CFG table=filter:108 family=2 entries=14 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.670982 kernel: audit: type=1325 audit(1707503663.646:285): table=filter:108 family=2 entries=14 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.646000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd7975fb0 a2=0 a3=ffffa0d976c0 items=0 ppid=2815 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.646000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.729427 kubelet[2612]: I0209 18:34:23.729389 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-lib-calico\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.729687 kubelet[2612]: I0209 18:34:23.729672 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-net-dir\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.729783 kubelet[2612]: I0209 18:34:23.729772 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-policysync\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.729873 
kubelet[2612]: I0209 18:34:23.729861 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-flexvol-driver-host\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.729989 kubelet[2612]: I0209 18:34:23.729977 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-xtables-lock\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730061 kubelet[2612]: I0209 18:34:23.730052 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50d4cb18-5fb6-457f-b802-158f671cfe09-tigera-ca-bundle\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730154 kubelet[2612]: I0209 18:34:23.730144 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/50d4cb18-5fb6-457f-b802-158f671cfe09-node-certs\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730234 kubelet[2612]: I0209 18:34:23.730225 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-bin-dir\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730326 kubelet[2612]: I0209 18:34:23.730316 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-log-dir\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730411 kubelet[2612]: I0209 18:34:23.730401 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-lib-modules\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730483 kubelet[2612]: I0209 18:34:23.730473 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwtvk\" (UniqueName: \"kubernetes.io/projected/50d4cb18-5fb6-457f-b802-158f671cfe09-kube-api-access-fwtvk\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.730554 kubelet[2612]: I0209 18:34:23.730545 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-run-calico\") pod \"calico-node-rzrmq\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " pod="calico-system/calico-node-rzrmq" Feb 9 18:34:23.738080 kernel: audit: type=1300 audit(1707503663.646:285): arch=c00000b7 syscall=211 
success=yes exit=4732 a0=3 a1=ffffd7975fb0 a2=0 a3=ffffa0d976c0 items=0 ppid=2815 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.738187 kernel: audit: type=1327 audit(1707503663.646:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.670000 audit[3025]: NETFILTER_CFG table=nat:109 family=2 entries=20 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.767278 kernel: audit: type=1325 audit(1707503663.670:286): table=nat:109 family=2 entries=20 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:23.670000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd7975fb0 a2=0 a3=ffffa0d976c0 items=0 ppid=2815 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:23.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:23.807162 kubelet[2612]: I0209 18:34:23.807120 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:23.807420 kubelet[2612]: E0209 18:34:23.807389 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:23.839201 kubelet[2612]: E0209 18:34:23.839177 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.839374 kubelet[2612]: W0209 18:34:23.839359 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.839454 kubelet[2612]: E0209 18:34:23.839445 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.860310 env[1440]: time="2024-02-09T18:34:23.859894803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-645fdf4995-m2797,Uid:25088b05-5aca-48bf-86a2-86ac430ea4dd,Namespace:calico-system,Attempt:0,}" Feb 9 18:34:23.868335 kubelet[2612]: E0209 18:34:23.868304 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.868335 kubelet[2612]: W0209 18:34:23.868326 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.868335 kubelet[2612]: E0209 18:34:23.868346 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.868708 kubelet[2612]: E0209 18:34:23.868685 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.868708 kubelet[2612]: W0209 18:34:23.868698 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.868708 kubelet[2612]: E0209 18:34:23.868709 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.868854 kubelet[2612]: E0209 18:34:23.868836 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.868854 kubelet[2612]: W0209 18:34:23.868848 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.868932 kubelet[2612]: E0209 18:34:23.868858 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.869073 kubelet[2612]: E0209 18:34:23.869054 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.869073 kubelet[2612]: W0209 18:34:23.869068 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.869158 kubelet[2612]: E0209 18:34:23.869079 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.869340 kubelet[2612]: E0209 18:34:23.869324 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.869340 kubelet[2612]: W0209 18:34:23.869335 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.869340 kubelet[2612]: E0209 18:34:23.869345 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.869462 kubelet[2612]: E0209 18:34:23.869458 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.869509 kubelet[2612]: W0209 18:34:23.869465 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.869509 kubelet[2612]: E0209 18:34:23.869474 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.869662 kubelet[2612]: E0209 18:34:23.869638 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.869662 kubelet[2612]: W0209 18:34:23.869652 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.869662 kubelet[2612]: E0209 18:34:23.869662 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.869904 kubelet[2612]: E0209 18:34:23.869885 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.869904 kubelet[2612]: W0209 18:34:23.869897 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.869904 kubelet[2612]: E0209 18:34:23.869908 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.870084 kubelet[2612]: E0209 18:34:23.870067 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.870084 kubelet[2612]: W0209 18:34:23.870081 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.870153 kubelet[2612]: E0209 18:34:23.870091 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.870266 kubelet[2612]: E0209 18:34:23.870242 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.870266 kubelet[2612]: W0209 18:34:23.870255 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.870266 kubelet[2612]: E0209 18:34:23.870265 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.870396 kubelet[2612]: E0209 18:34:23.870382 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.870396 kubelet[2612]: W0209 18:34:23.870393 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.870466 kubelet[2612]: E0209 18:34:23.870405 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.870532 kubelet[2612]: E0209 18:34:23.870516 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.870532 kubelet[2612]: W0209 18:34:23.870529 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.870600 kubelet[2612]: E0209 18:34:23.870539 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.870725 kubelet[2612]: E0209 18:34:23.870706 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.870725 kubelet[2612]: W0209 18:34:23.870721 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.870787 kubelet[2612]: E0209 18:34:23.870731 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.870864 kubelet[2612]: E0209 18:34:23.870849 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.870864 kubelet[2612]: W0209 18:34:23.870861 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.870935 kubelet[2612]: E0209 18:34:23.870870 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.871078 kubelet[2612]: E0209 18:34:23.871062 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.871078 kubelet[2612]: W0209 18:34:23.871074 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.871148 kubelet[2612]: E0209 18:34:23.871085 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.935226 env[1440]: time="2024-02-09T18:34:23.930941408Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:23.935226 env[1440]: time="2024-02-09T18:34:23.930997288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:23.935226 env[1440]: time="2024-02-09T18:34:23.931007208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:23.935226 env[1440]: time="2024-02-09T18:34:23.931116008Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065 pid=3053 runtime=io.containerd.runc.v2 Feb 9 18:34:23.936075 kubelet[2612]: E0209 18:34:23.936046 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.936191 kubelet[2612]: W0209 18:34:23.936176 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.941020 kubelet[2612]: E0209 18:34:23.936264 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.942065 kubelet[2612]: I0209 18:34:23.942045 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/844edbd0-8ef3-4fe8-912a-b7cf2c34e24c-kubelet-dir\") pod \"csi-node-driver-7hwhv\" (UID: \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\") " pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:23.944252 kubelet[2612]: E0209 18:34:23.944233 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.946532 kubelet[2612]: W0209 18:34:23.946504 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.946656 kubelet[2612]: E0209 18:34:23.946642 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.947149 kubelet[2612]: E0209 18:34:23.946939 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.947149 kubelet[2612]: W0209 18:34:23.946978 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.947149 kubelet[2612]: E0209 18:34:23.946993 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.947467 kubelet[2612]: E0209 18:34:23.947330 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.947467 kubelet[2612]: W0209 18:34:23.947342 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.947467 kubelet[2612]: E0209 18:34:23.947356 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.947467 kubelet[2612]: I0209 18:34:23.947394 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/844edbd0-8ef3-4fe8-912a-b7cf2c34e24c-socket-dir\") pod \"csi-node-driver-7hwhv\" (UID: \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\") " pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:23.947750 kubelet[2612]: E0209 18:34:23.947738 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.947837 kubelet[2612]: W0209 18:34:23.947825 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.947916 kubelet[2612]: E0209 18:34:23.947906 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.948086 kubelet[2612]: I0209 18:34:23.948075 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/844edbd0-8ef3-4fe8-912a-b7cf2c34e24c-varrun\") pod \"csi-node-driver-7hwhv\" (UID: \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\") " pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:23.948282 kubelet[2612]: E0209 18:34:23.948270 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.948376 kubelet[2612]: W0209 18:34:23.948363 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.948450 kubelet[2612]: E0209 18:34:23.948440 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.948673 kubelet[2612]: E0209 18:34:23.948663 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.948746 kubelet[2612]: W0209 18:34:23.948735 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.948822 kubelet[2612]: E0209 18:34:23.948813 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.949104 kubelet[2612]: E0209 18:34:23.949092 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.949183 kubelet[2612]: W0209 18:34:23.949171 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.949248 kubelet[2612]: E0209 18:34:23.949228 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.949466 kubelet[2612]: E0209 18:34:23.949456 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.949551 kubelet[2612]: W0209 18:34:23.949539 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.949628 kubelet[2612]: E0209 18:34:23.949619 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.949890 kubelet[2612]: E0209 18:34:23.949878 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.950001 kubelet[2612]: W0209 18:34:23.949989 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.950082 kubelet[2612]: E0209 18:34:23.950072 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.950360 kubelet[2612]: E0209 18:34:23.950349 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.950451 kubelet[2612]: W0209 18:34:23.950439 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.950543 kubelet[2612]: E0209 18:34:23.950534 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.950645 kubelet[2612]: I0209 18:34:23.950635 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/844edbd0-8ef3-4fe8-912a-b7cf2c34e24c-registration-dir\") pod \"csi-node-driver-7hwhv\" (UID: \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\") " pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:23.951004 kubelet[2612]: E0209 18:34:23.950939 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.951102 kubelet[2612]: W0209 18:34:23.951089 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.951179 kubelet[2612]: E0209 18:34:23.951170 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.954455 kubelet[2612]: E0209 18:34:23.954439 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.954573 kubelet[2612]: W0209 18:34:23.954560 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.954653 kubelet[2612]: E0209 18:34:23.954644 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.955008 kubelet[2612]: E0209 18:34:23.954940 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.955092 kubelet[2612]: W0209 18:34:23.955080 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.955149 kubelet[2612]: E0209 18:34:23.955140 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.955230 kubelet[2612]: I0209 18:34:23.955221 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsx7n\" (UniqueName: \"kubernetes.io/projected/844edbd0-8ef3-4fe8-912a-b7cf2c34e24c-kube-api-access-dsx7n\") pod \"csi-node-driver-7hwhv\" (UID: \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\") " pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:23.955509 kubelet[2612]: E0209 18:34:23.955496 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.955623 kubelet[2612]: W0209 18:34:23.955596 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.955683 kubelet[2612]: E0209 18:34:23.955673 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:23.956031 kubelet[2612]: E0209 18:34:23.956016 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:23.956128 kubelet[2612]: W0209 18:34:23.956115 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:23.956204 kubelet[2612]: E0209 18:34:23.956194 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:23.997133 env[1440]: time="2024-02-09T18:34:23.997090686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-645fdf4995-m2797,Uid:25088b05-5aca-48bf-86a2-86ac430ea4dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\"" Feb 9 18:34:24.001456 env[1440]: time="2024-02-09T18:34:24.000693970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 9 18:34:24.057518 kubelet[2612]: E0209 18:34:24.057374 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.057518 kubelet[2612]: W0209 18:34:24.057394 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.057518 kubelet[2612]: E0209 18:34:24.057416 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.058012 kubelet[2612]: E0209 18:34:24.057822 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.058012 kubelet[2612]: W0209 18:34:24.057835 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.058012 kubelet[2612]: E0209 18:34:24.057849 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.058334 kubelet[2612]: E0209 18:34:24.058185 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.058334 kubelet[2612]: W0209 18:34:24.058198 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.058334 kubelet[2612]: E0209 18:34:24.058214 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.058642 kubelet[2612]: E0209 18:34:24.058512 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.058642 kubelet[2612]: W0209 18:34:24.058523 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.058642 kubelet[2612]: E0209 18:34:24.058550 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:24.058976 kubelet[2612]: E0209 18:34:24.058811 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.058976 kubelet[2612]: W0209 18:34:24.058821 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.058976 kubelet[2612]: E0209 18:34:24.058836 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.059466 kubelet[2612]: E0209 18:34:24.059161 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.059466 kubelet[2612]: W0209 18:34:24.059171 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.059466 kubelet[2612]: E0209 18:34:24.059186 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.059466 kubelet[2612]: E0209 18:34:24.059371 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.059466 kubelet[2612]: W0209 18:34:24.059381 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.059466 kubelet[2612]: E0209 18:34:24.059395 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.059810 kubelet[2612]: E0209 18:34:24.059699 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.059810 kubelet[2612]: W0209 18:34:24.059711 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.059810 kubelet[2612]: E0209 18:34:24.059722 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.060400 kubelet[2612]: E0209 18:34:24.059985 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.060400 kubelet[2612]: W0209 18:34:24.059997 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.060400 kubelet[2612]: E0209 18:34:24.060087 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:24.060400 kubelet[2612]: E0209 18:34:24.060165 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.060400 kubelet[2612]: W0209 18:34:24.060171 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.060400 kubelet[2612]: E0209 18:34:24.060252 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.060400 kubelet[2612]: E0209 18:34:24.060323 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.060400 kubelet[2612]: W0209 18:34:24.060330 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.060821 kubelet[2612]: E0209 18:34:24.060667 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.060821 kubelet[2612]: E0209 18:34:24.060737 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.060821 kubelet[2612]: W0209 18:34:24.060745 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.060973 kubelet[2612]: E0209 18:34:24.060934 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.061162 kubelet[2612]: E0209 18:34:24.061151 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.061252 kubelet[2612]: W0209 18:34:24.061241 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.061401 kubelet[2612]: E0209 18:34:24.061391 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.061606 kubelet[2612]: E0209 18:34:24.061595 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.061689 kubelet[2612]: W0209 18:34:24.061678 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.061840 kubelet[2612]: E0209 18:34:24.061831 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:24.062035 kubelet[2612]: E0209 18:34:24.062010 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.062132 kubelet[2612]: W0209 18:34:24.062119 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.062296 kubelet[2612]: E0209 18:34:24.062285 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.062423 kubelet[2612]: E0209 18:34:24.062414 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.062492 kubelet[2612]: W0209 18:34:24.062482 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.062573 kubelet[2612]: E0209 18:34:24.062564 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.062802 kubelet[2612]: E0209 18:34:24.062782 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.062884 kubelet[2612]: W0209 18:34:24.062861 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.062994 kubelet[2612]: E0209 18:34:24.062984 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.063276 kubelet[2612]: E0209 18:34:24.063265 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.063372 kubelet[2612]: W0209 18:34:24.063360 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.063451 kubelet[2612]: E0209 18:34:24.063442 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.063705 kubelet[2612]: E0209 18:34:24.063694 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.063795 kubelet[2612]: W0209 18:34:24.063783 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.063875 kubelet[2612]: E0209 18:34:24.063866 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:24.064215 kubelet[2612]: E0209 18:34:24.064202 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.064322 kubelet[2612]: W0209 18:34:24.064309 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.064400 kubelet[2612]: E0209 18:34:24.064391 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.064665 kubelet[2612]: E0209 18:34:24.064654 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.064757 kubelet[2612]: W0209 18:34:24.064745 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.064837 kubelet[2612]: E0209 18:34:24.064828 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.065106 kubelet[2612]: E0209 18:34:24.065095 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.065194 kubelet[2612]: W0209 18:34:24.065182 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.065355 kubelet[2612]: E0209 18:34:24.065345 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.065596 kubelet[2612]: E0209 18:34:24.065575 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.065688 kubelet[2612]: W0209 18:34:24.065677 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.065769 kubelet[2612]: E0209 18:34:24.065760 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.066058 kubelet[2612]: E0209 18:34:24.066046 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.066146 kubelet[2612]: W0209 18:34:24.066135 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.066226 kubelet[2612]: E0209 18:34:24.066216 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:24.066491 kubelet[2612]: E0209 18:34:24.066481 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.066579 kubelet[2612]: W0209 18:34:24.066567 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.066680 kubelet[2612]: E0209 18:34:24.066671 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.066883 kubelet[2612]: E0209 18:34:24.066872 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.066993 kubelet[2612]: W0209 18:34:24.066980 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.067066 kubelet[2612]: E0209 18:34:24.067056 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.139057 kubelet[2612]: E0209 18:34:24.139033 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.139215 kubelet[2612]: W0209 18:34:24.139200 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.139279 kubelet[2612]: E0209 18:34:24.139269 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.160371 kubelet[2612]: E0209 18:34:24.160352 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.160499 kubelet[2612]: W0209 18:34:24.160485 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.160562 kubelet[2612]: E0209 18:34:24.160549 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:24.239767 env[1440]: time="2024-02-09T18:34:24.235399283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rzrmq,Uid:50d4cb18-5fb6-457f-b802-158f671cfe09,Namespace:calico-system,Attempt:0,}" Feb 9 18:34:24.261516 kubelet[2612]: E0209 18:34:24.261495 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.261656 kubelet[2612]: W0209 18:34:24.261643 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.261736 kubelet[2612]: E0209 18:34:24.261726 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.346032 kubelet[2612]: E0209 18:34:24.346007 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:24.346188 kubelet[2612]: W0209 18:34:24.346173 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:24.346288 kubelet[2612]: E0209 18:34:24.346277 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:24.346897 env[1440]: time="2024-02-09T18:34:24.346835772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:24.347104 env[1440]: time="2024-02-09T18:34:24.347078613Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:24.347204 env[1440]: time="2024-02-09T18:34:24.347183933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:24.347488 env[1440]: time="2024-02-09T18:34:24.347444893Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc pid=3147 runtime=io.containerd.runc.v2 Feb 9 18:34:24.401166 env[1440]: time="2024-02-09T18:34:24.399935794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rzrmq,Uid:50d4cb18-5fb6-457f-b802-158f671cfe09,Namespace:calico-system,Attempt:0,} returns sandbox id \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\"" Feb 9 18:34:24.759000 audit[3222]: NETFILTER_CFG table=filter:110 family=2 entries=14 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:24.759000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd8b646f0 a2=0 a3=ffff9f6526c0 items=0 ppid=2815 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:24.759000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:24.763000 audit[3222]: NETFILTER_CFG table=nat:111 family=2 entries=20 op=nft_register_rule pid=3222 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:24.763000 audit[3222]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd8b646f0 a2=0 a3=ffff9f6526c0 items=0 ppid=2815 pid=3222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:24.763000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:25.426171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2587512330.mount: Deactivated successfully. 
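The two audit events above record what is most likely kube-proxy refreshing its filter and nat rules through iptables-restore, which on this image is the /usr/sbin/xtables-nft-multi multicall binary. The comm= field is truncated to the kernel's 16-byte task name ("iptables-restor"), so the full command line has to be recovered from the PROCTITLE record, which carries the process argv hex-encoded with NUL separators. A small illustrative decoder (Python, using the payload copied from the records above; not part of the node's tooling) recovers "iptables-restore -w 5 -W 100000 --noflush --counters":

#!/usr/bin/env python3
# Decode an audit PROCTITLE payload: hex-encoded argv with NUL bytes between arguments.
# The hex string below is copied verbatim from the audit records above.
PROCTITLE_HEX = (
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)

def decode_proctitle(hex_payload):
    raw = bytes.fromhex(hex_payload)
    # Arguments are separated by NUL bytes; drop empty trailing pieces.
    return [arg.decode("utf-8", errors="replace") for arg in raw.split(b"\x00") if arg]

print(decode_proctitle(PROCTITLE_HEX))
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']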
Feb 9 18:34:25.815224 kubelet[2612]: E0209 18:34:25.815124 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:26.406947 env[1440]: time="2024-02-09T18:34:26.406905285Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:26.423959 env[1440]: time="2024-02-09T18:34:26.423905744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:26.431662 env[1440]: time="2024-02-09T18:34:26.431626392Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:26.441598 env[1440]: time="2024-02-09T18:34:26.441553523Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:26.441910 env[1440]: time="2024-02-09T18:34:26.441863404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\"" Feb 9 18:34:26.444789 env[1440]: time="2024-02-09T18:34:26.443286445Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\"" Feb 9 18:34:26.460985 env[1440]: time="2024-02-09T18:34:26.459392983Z" level=info msg="CreateContainer within sandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 18:34:26.510629 env[1440]: time="2024-02-09T18:34:26.510585440Z" level=info msg="CreateContainer within sandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\"" Feb 9 18:34:26.511434 env[1440]: time="2024-02-09T18:34:26.511398281Z" level=info msg="StartContainer for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\"" Feb 9 18:34:26.580907 env[1440]: time="2024-02-09T18:34:26.580862799Z" level=info msg="StartContainer for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" returns successfully" Feb 9 18:34:26.934476 env[1440]: time="2024-02-09T18:34:26.934434394Z" level=info msg="StopContainer for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" with timeout 300 (s)" Feb 9 18:34:26.934850 env[1440]: time="2024-02-09T18:34:26.934820554Z" level=info msg="Stop container \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" with signal terminated" Feb 9 18:34:26.990420 kubelet[2612]: I0209 18:34:26.989816 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-645fdf4995-m2797" podStartSLOduration=-9.223372032865074e+09 pod.CreationTimestamp="2024-02-09 18:34:23 +0000 UTC" firstStartedPulling="2024-02-09 18:34:23.998375847 +0000 UTC m=+22.449972783" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:26.989494295 +0000 UTC m=+25.441091231" watchObservedRunningTime="2024-02-09 18:34:26.989702295 +0000 UTC m=+25.441299231" Feb 9 18:34:27.448357 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96-rootfs.mount: Deactivated successfully. Feb 9 18:34:27.722178 env[1440]: time="2024-02-09T18:34:27.722125698Z" level=info msg="shim disconnected" id=54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96 Feb 9 18:34:27.722178 env[1440]: time="2024-02-09T18:34:27.722172298Z" level=warning msg="cleaning up after shim disconnected" id=54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96 namespace=k8s.io Feb 9 18:34:27.722178 env[1440]: time="2024-02-09T18:34:27.722181498Z" level=info msg="cleaning up dead shim" Feb 9 18:34:27.729832 env[1440]: time="2024-02-09T18:34:27.729788426Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3284 runtime=io.containerd.runc.v2\n" Feb 9 18:34:27.736666 env[1440]: time="2024-02-09T18:34:27.736246153Z" level=info msg="StopContainer for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" returns successfully" Feb 9 18:34:27.736900 env[1440]: time="2024-02-09T18:34:27.736861914Z" level=info msg="StopPodSandbox for \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\"" Feb 9 18:34:27.736968 env[1440]: time="2024-02-09T18:34:27.736928434Z" level=info msg="Container to stop \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:34:27.740670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065-shm.mount: Deactivated successfully. Feb 9 18:34:27.778558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065-rootfs.mount: Deactivated successfully. 
Feb 9 18:34:27.800799 env[1440]: time="2024-02-09T18:34:27.799593583Z" level=info msg="shim disconnected" id=a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065 Feb 9 18:34:27.800799 env[1440]: time="2024-02-09T18:34:27.799644703Z" level=warning msg="cleaning up after shim disconnected" id=a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065 namespace=k8s.io Feb 9 18:34:27.800799 env[1440]: time="2024-02-09T18:34:27.799654503Z" level=info msg="cleaning up dead shim" Feb 9 18:34:27.807144 env[1440]: time="2024-02-09T18:34:27.807098591Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3316 runtime=io.containerd.runc.v2\n" Feb 9 18:34:27.807412 env[1440]: time="2024-02-09T18:34:27.807382191Z" level=info msg="TearDown network for sandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" successfully" Feb 9 18:34:27.807447 env[1440]: time="2024-02-09T18:34:27.807409311Z" level=info msg="StopPodSandbox for \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" returns successfully" Feb 9 18:34:27.815910 kubelet[2612]: E0209 18:34:27.815865 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:27.936494 kubelet[2612]: I0209 18:34:27.936390 2612 scope.go:115] "RemoveContainer" containerID="54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96" Feb 9 18:34:27.938371 env[1440]: time="2024-02-09T18:34:27.938337095Z" level=info msg="RemoveContainer for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\"" Feb 9 18:34:27.957734 env[1440]: time="2024-02-09T18:34:27.957688676Z" level=info msg="RemoveContainer for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" returns successfully" Feb 9 18:34:27.958152 kubelet[2612]: I0209 18:34:27.958123 2612 scope.go:115] "RemoveContainer" containerID="54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96" Feb 9 18:34:27.958502 env[1440]: time="2024-02-09T18:34:27.958434037Z" level=error msg="ContainerStatus for \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\": not found" Feb 9 18:34:27.958623 kubelet[2612]: E0209 18:34:27.958603 2612 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\": not found" containerID="54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96" Feb 9 18:34:27.958681 kubelet[2612]: I0209 18:34:27.958638 2612 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96} err="failed to get container status \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\": rpc error: code = NotFound desc = an error occurred when try to find container \"54a3c5b5e13aa83b673e228581d47a16f3c0adb4f49859bbac3561d5b23a2d96\": not found" Feb 9 18:34:27.985274 kubelet[2612]: E0209 18:34:27.985049 2612 driver-call.go:262] Failed to unmarshal 
output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:27.985274 kubelet[2612]: W0209 18:34:27.985082 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:27.985274 kubelet[2612]: E0209 18:34:27.985099 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:27.985274 kubelet[2612]: I0209 18:34:27.985129 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vkfb\" (UniqueName: \"kubernetes.io/projected/25088b05-5aca-48bf-86a2-86ac430ea4dd-kube-api-access-8vkfb\") pod \"25088b05-5aca-48bf-86a2-86ac430ea4dd\" (UID: \"25088b05-5aca-48bf-86a2-86ac430ea4dd\") " Feb 9 18:34:27.985680 kubelet[2612]: E0209 18:34:27.985528 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:27.985680 kubelet[2612]: W0209 18:34:27.985541 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:27.985680 kubelet[2612]: E0209 18:34:27.985571 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:27.985680 kubelet[2612]: I0209 18:34:27.985594 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/25088b05-5aca-48bf-86a2-86ac430ea4dd-typha-certs\") pod \"25088b05-5aca-48bf-86a2-86ac430ea4dd\" (UID: \"25088b05-5aca-48bf-86a2-86ac430ea4dd\") " Feb 9 18:34:27.986701 kubelet[2612]: E0209 18:34:27.985869 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:27.986701 kubelet[2612]: W0209 18:34:27.985880 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:27.986701 kubelet[2612]: E0209 18:34:27.985901 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:27.986701 kubelet[2612]: I0209 18:34:27.985932 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25088b05-5aca-48bf-86a2-86ac430ea4dd-tigera-ca-bundle\") pod \"25088b05-5aca-48bf-86a2-86ac430ea4dd\" (UID: \"25088b05-5aca-48bf-86a2-86ac430ea4dd\") " Feb 9 18:34:27.986701 kubelet[2612]: E0209 18:34:27.986094 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:27.986701 kubelet[2612]: W0209 18:34:27.986105 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:27.986701 kubelet[2612]: E0209 18:34:27.986119 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:27.994532 kubelet[2612]: E0209 18:34:27.988494 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:27.994532 kubelet[2612]: W0209 18:34:27.988531 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:27.994532 kubelet[2612]: E0209 18:34:27.988554 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:27.994532 kubelet[2612]: I0209 18:34:27.989266 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25088b05-5aca-48bf-86a2-86ac430ea4dd-kube-api-access-8vkfb" (OuterVolumeSpecName: "kube-api-access-8vkfb") pod "25088b05-5aca-48bf-86a2-86ac430ea4dd" (UID: "25088b05-5aca-48bf-86a2-86ac430ea4dd"). InnerVolumeSpecName "kube-api-access-8vkfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:34:27.994532 kubelet[2612]: I0209 18:34:27.991560 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25088b05-5aca-48bf-86a2-86ac430ea4dd-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "25088b05-5aca-48bf-86a2-86ac430ea4dd" (UID: "25088b05-5aca-48bf-86a2-86ac430ea4dd"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:34:27.991778 systemd[1]: var-lib-kubelet-pods-25088b05\x2d5aca\x2d48bf\x2d86a2\x2d86ac430ea4dd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8vkfb.mount: Deactivated successfully. Feb 9 18:34:27.993690 systemd[1]: var-lib-kubelet-pods-25088b05\x2d5aca\x2d48bf\x2d86a2\x2d86ac430ea4dd-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. 
Feb 9 18:34:27.996460 kubelet[2612]: E0209 18:34:27.996437 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:27.996460 kubelet[2612]: W0209 18:34:27.996457 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:27.996570 kubelet[2612]: E0209 18:34:27.996475 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:27.996597 kubelet[2612]: W0209 18:34:27.996572 2612 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/25088b05-5aca-48bf-86a2-86ac430ea4dd/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 18:34:27.996763 kubelet[2612]: I0209 18:34:27.996744 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25088b05-5aca-48bf-86a2-86ac430ea4dd-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "25088b05-5aca-48bf-86a2-86ac430ea4dd" (UID: "25088b05-5aca-48bf-86a2-86ac430ea4dd"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:34:28.087158 kubelet[2612]: I0209 18:34:28.087087 2612 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25088b05-5aca-48bf-86a2-86ac430ea4dd-tigera-ca-bundle\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:28.087158 kubelet[2612]: I0209 18:34:28.087127 2612 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-8vkfb\" (UniqueName: \"kubernetes.io/projected/25088b05-5aca-48bf-86a2-86ac430ea4dd-kube-api-access-8vkfb\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:28.087158 kubelet[2612]: I0209 18:34:28.087138 2612 reconciler_common.go:295] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/25088b05-5aca-48bf-86a2-86ac430ea4dd-typha-certs\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:28.258465 kubelet[2612]: I0209 18:34:28.258343 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:28.258465 kubelet[2612]: E0209 18:34:28.258413 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="25088b05-5aca-48bf-86a2-86ac430ea4dd" containerName="calico-typha" Feb 9 18:34:28.258465 kubelet[2612]: I0209 18:34:28.258448 2612 memory_manager.go:346] "RemoveStaleState removing state" podUID="25088b05-5aca-48bf-86a2-86ac430ea4dd" containerName="calico-typha" Feb 9 18:34:28.306803 kubelet[2612]: E0209 18:34:28.306764 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.306803 kubelet[2612]: W0209 18:34:28.306786 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.306818 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.306992 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.307398 kubelet[2612]: W0209 18:34:28.307001 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.307012 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.307161 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.307398 kubelet[2612]: W0209 18:34:28.307168 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.307178 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.307361 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.307398 kubelet[2612]: W0209 18:34:28.307370 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307398 kubelet[2612]: E0209 18:34:28.307380 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.307626 kubelet[2612]: E0209 18:34:28.307505 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.307626 kubelet[2612]: W0209 18:34:28.307512 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307626 kubelet[2612]: E0209 18:34:28.307522 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.307694 kubelet[2612]: E0209 18:34:28.307632 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.307694 kubelet[2612]: W0209 18:34:28.307647 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307694 kubelet[2612]: E0209 18:34:28.307657 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.307852 kubelet[2612]: E0209 18:34:28.307830 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.307852 kubelet[2612]: W0209 18:34:28.307845 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.307933 kubelet[2612]: E0209 18:34:28.307856 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.308030 kubelet[2612]: E0209 18:34:28.308003 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.308030 kubelet[2612]: W0209 18:34:28.308015 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.308030 kubelet[2612]: E0209 18:34:28.308025 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.308189 kubelet[2612]: E0209 18:34:28.308171 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.308189 kubelet[2612]: W0209 18:34:28.308184 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.308256 kubelet[2612]: E0209 18:34:28.308196 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.316000 audit[3386]: NETFILTER_CFG table=filter:112 family=2 entries=14 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:28.316000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=fffff594d760 a2=0 a3=ffff94a416c0 items=0 ppid=2815 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:28.317000 audit[3386]: NETFILTER_CFG table=nat:113 family=2 entries=20 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:28.317000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=fffff594d760 a2=0 a3=ffff94a416c0 items=0 ppid=2815 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.317000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:28.388826 kernel: kauditd_printk_skb: 14 callbacks suppressed Feb 9 18:34:28.388941 kernel: audit: type=1325 audit(1707503668.366:291): table=filter:114 family=2 entries=14 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:28.366000 audit[3412]: NETFILTER_CFG table=filter:114 family=2 entries=14 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:28.366000 audit[3412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc8f3a0a0 a2=0 a3=ffffa79d06c0 items=0 ppid=2815 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.423044 kernel: audit: type=1300 audit(1707503668.366:291): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc8f3a0a0 a2=0 a3=ffffa79d06c0 items=0 ppid=2815 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.423138 kubelet[2612]: E0209 18:34:28.423083 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.423138 kubelet[2612]: W0209 18:34:28.423101 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.423138 kubelet[2612]: E0209 18:34:28.423125 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.423240 kubelet[2612]: I0209 18:34:28.423153 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnr4\" (UniqueName: \"kubernetes.io/projected/fd3e0dca-d29e-4584-ba0b-84f49664e80c-kube-api-access-cjnr4\") pod \"calico-typha-596488b6d-xxt2v\" (UID: \"fd3e0dca-d29e-4584-ba0b-84f49664e80c\") " pod="calico-system/calico-typha-596488b6d-xxt2v" Feb 9 18:34:28.424342 kubelet[2612]: E0209 18:34:28.423496 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424342 kubelet[2612]: W0209 18:34:28.423514 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424342 kubelet[2612]: E0209 18:34:28.423528 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.424342 kubelet[2612]: I0209 18:34:28.423554 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/fd3e0dca-d29e-4584-ba0b-84f49664e80c-typha-certs\") pod \"calico-typha-596488b6d-xxt2v\" (UID: \"fd3e0dca-d29e-4584-ba0b-84f49664e80c\") " pod="calico-system/calico-typha-596488b6d-xxt2v" Feb 9 18:34:28.424342 kubelet[2612]: E0209 18:34:28.423764 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424342 kubelet[2612]: W0209 18:34:28.423772 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424342 kubelet[2612]: E0209 18:34:28.423788 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.424342 kubelet[2612]: I0209 18:34:28.423806 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fd3e0dca-d29e-4584-ba0b-84f49664e80c-tigera-ca-bundle\") pod \"calico-typha-596488b6d-xxt2v\" (UID: \"fd3e0dca-d29e-4584-ba0b-84f49664e80c\") " pod="calico-system/calico-typha-596488b6d-xxt2v" Feb 9 18:34:28.424342 kubelet[2612]: E0209 18:34:28.423934 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424602 kubelet[2612]: W0209 18:34:28.423941 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424602 kubelet[2612]: E0209 18:34:28.423973 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.424602 kubelet[2612]: E0209 18:34:28.424092 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424602 kubelet[2612]: W0209 18:34:28.424099 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424602 kubelet[2612]: E0209 18:34:28.424109 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.424602 kubelet[2612]: E0209 18:34:28.424252 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424602 kubelet[2612]: W0209 18:34:28.424259 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424602 kubelet[2612]: E0209 18:34:28.424269 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.424602 kubelet[2612]: E0209 18:34:28.424372 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424602 kubelet[2612]: W0209 18:34:28.424378 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424811 kubelet[2612]: E0209 18:34:28.424388 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.424811 kubelet[2612]: E0209 18:34:28.424512 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424811 kubelet[2612]: W0209 18:34:28.424519 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424811 kubelet[2612]: E0209 18:34:28.424528 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.424811 kubelet[2612]: E0209 18:34:28.424626 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.424811 kubelet[2612]: W0209 18:34:28.424632 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.424811 kubelet[2612]: E0209 18:34:28.424640 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:28.448882 systemd[1]: var-lib-kubelet-pods-25088b05\x2d5aca\x2d48bf\x2d86a2\x2d86ac430ea4dd-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Feb 9 18:34:28.389000 audit[3412]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:28.459465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2155379060.mount: Deactivated successfully. Feb 9 18:34:28.466996 kernel: audit: type=1327 audit(1707503668.366:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:28.467071 kernel: audit: type=1325 audit(1707503668.389:292): table=nat:115 family=2 entries=20 op=nft_register_rule pid=3412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:28.389000 audit[3412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc8f3a0a0 a2=0 a3=ffffa79d06c0 items=0 ppid=2815 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.500361 kernel: audit: type=1300 audit(1707503668.389:292): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc8f3a0a0 a2=0 a3=ffffa79d06c0 items=0 ppid=2815 pid=3412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:28.500932 kernel: audit: type=1327 audit(1707503668.389:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:28.389000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:28.531024 kubelet[2612]: E0209 18:34:28.530807 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.531024 kubelet[2612]: W0209 18:34:28.530835 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.531024 kubelet[2612]: E0209 18:34:28.530858 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.531540 kubelet[2612]: E0209 18:34:28.531298 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.531540 kubelet[2612]: W0209 18:34:28.531320 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.531540 kubelet[2612]: E0209 18:34:28.531336 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.531939 kubelet[2612]: E0209 18:34:28.531705 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.531939 kubelet[2612]: W0209 18:34:28.531717 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.531939 kubelet[2612]: E0209 18:34:28.531732 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.532550 kubelet[2612]: E0209 18:34:28.532379 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.532550 kubelet[2612]: W0209 18:34:28.532392 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.532550 kubelet[2612]: E0209 18:34:28.532447 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.532932 kubelet[2612]: E0209 18:34:28.532736 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.532932 kubelet[2612]: W0209 18:34:28.532748 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.532932 kubelet[2612]: E0209 18:34:28.532825 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.533332 kubelet[2612]: E0209 18:34:28.533122 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.533332 kubelet[2612]: W0209 18:34:28.533133 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.533332 kubelet[2612]: E0209 18:34:28.533223 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.533845 kubelet[2612]: E0209 18:34:28.533607 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.533845 kubelet[2612]: W0209 18:34:28.533619 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.533845 kubelet[2612]: E0209 18:34:28.533707 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.534214 kubelet[2612]: E0209 18:34:28.534015 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.534214 kubelet[2612]: W0209 18:34:28.534027 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.534214 kubelet[2612]: E0209 18:34:28.534043 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.534542 kubelet[2612]: E0209 18:34:28.534379 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.534542 kubelet[2612]: W0209 18:34:28.534390 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.534542 kubelet[2612]: E0209 18:34:28.534405 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.535120 kubelet[2612]: E0209 18:34:28.535019 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.535120 kubelet[2612]: W0209 18:34:28.535032 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.535120 kubelet[2612]: E0209 18:34:28.535046 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.536192 kubelet[2612]: E0209 18:34:28.536156 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.536287 kubelet[2612]: W0209 18:34:28.536274 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.536403 kubelet[2612]: E0209 18:34:28.536394 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.536570 kubelet[2612]: E0209 18:34:28.536561 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.536644 kubelet[2612]: W0209 18:34:28.536633 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.536763 kubelet[2612]: E0209 18:34:28.536753 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.536914 kubelet[2612]: E0209 18:34:28.536905 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.536997 kubelet[2612]: W0209 18:34:28.536986 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.537133 kubelet[2612]: E0209 18:34:28.537125 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.537306 kubelet[2612]: E0209 18:34:28.537296 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.537700 kubelet[2612]: W0209 18:34:28.537685 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.537798 kubelet[2612]: E0209 18:34:28.537789 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.538228 kubelet[2612]: E0209 18:34:28.538215 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.538308 kubelet[2612]: W0209 18:34:28.538297 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.538374 kubelet[2612]: E0209 18:34:28.538365 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.538581 kubelet[2612]: E0209 18:34:28.538571 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.538647 kubelet[2612]: W0209 18:34:28.538637 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.538710 kubelet[2612]: E0209 18:34:28.538702 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.556358 kubelet[2612]: E0209 18:34:28.556338 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.556488 kubelet[2612]: W0209 18:34:28.556475 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.556552 kubelet[2612]: E0209 18:34:28.556543 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 9 18:34:28.559280 kubelet[2612]: E0209 18:34:28.559265 2612 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 9 18:34:28.563772 kubelet[2612]: W0209 18:34:28.559364 2612 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 9 18:34:28.563772 kubelet[2612]: E0209 18:34:28.559383 2612 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 9 18:34:28.564449 env[1440]: time="2024-02-09T18:34:28.564164369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-596488b6d-xxt2v,Uid:fd3e0dca-d29e-4584-ba0b-84f49664e80c,Namespace:calico-system,Attempt:0,}" Feb 9 18:34:28.669011 env[1440]: time="2024-02-09T18:34:28.668826641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:28.669011 env[1440]: time="2024-02-09T18:34:28.668863521Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:28.669011 env[1440]: time="2024-02-09T18:34:28.668873601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:28.669993 env[1440]: time="2024-02-09T18:34:28.669233322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5d76b66b1f83f781ba6088aef488fb7fd79ad2e03d922a8b5eee58571923a3b pid=3449 runtime=io.containerd.runc.v2 Feb 9 18:34:28.747171 env[1440]: time="2024-02-09T18:34:28.747135325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-596488b6d-xxt2v,Uid:fd3e0dca-d29e-4584-ba0b-84f49664e80c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5d76b66b1f83f781ba6088aef488fb7fd79ad2e03d922a8b5eee58571923a3b\"" Feb 9 18:34:28.756637 env[1440]: time="2024-02-09T18:34:28.756600656Z" level=info msg="CreateContainer within sandbox \"c5d76b66b1f83f781ba6088aef488fb7fd79ad2e03d922a8b5eee58571923a3b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 9 18:34:28.873546 env[1440]: time="2024-02-09T18:34:28.873490341Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.900971 env[1440]: time="2024-02-09T18:34:28.900918491Z" level=info msg="CreateContainer within sandbox \"c5d76b66b1f83f781ba6088aef488fb7fd79ad2e03d922a8b5eee58571923a3b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"186b7e528fdb07dda954a03983b31a3cafda0330980c6eec4b1e2930959c064f\"" Feb 9 18:34:28.901662 env[1440]: time="2024-02-09T18:34:28.901633451Z" level=info msg="StartContainer for \"186b7e528fdb07dda954a03983b31a3cafda0330980c6eec4b1e2930959c064f\"" Feb 9 18:34:28.910135 env[1440]: time="2024-02-09T18:34:28.910102821Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.921793 env[1440]: time="2024-02-09T18:34:28.921458113Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.930853 env[1440]: time="2024-02-09T18:34:28.930812763Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:28.931113 env[1440]: time="2024-02-09T18:34:28.931082523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 9 18:34:28.933250 env[1440]: time="2024-02-09T18:34:28.933169725Z" level=info msg="CreateContainer within sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 18:34:28.982642 env[1440]: time="2024-02-09T18:34:28.982593298Z" level=info msg="StartContainer for \"186b7e528fdb07dda954a03983b31a3cafda0330980c6eec4b1e2930959c064f\" returns successfully" Feb 9 18:34:29.035252 env[1440]: time="2024-02-09T18:34:29.035193594Z" level=info msg="CreateContainer within sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c\"" Feb 9 18:34:29.035903 env[1440]: time="2024-02-09T18:34:29.035862315Z" level=info msg="StartContainer for \"2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c\"" Feb 9 18:34:29.122529 env[1440]: time="2024-02-09T18:34:29.122477886Z" level=info msg="StartContainer for \"2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c\" returns successfully" Feb 9 18:34:29.266993 env[1440]: time="2024-02-09T18:34:29.266864079Z" level=info msg="shim disconnected" id=2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c Feb 9 18:34:29.266993 env[1440]: time="2024-02-09T18:34:29.266915679Z" level=warning msg="cleaning up after shim disconnected" id=2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c namespace=k8s.io Feb 9 18:34:29.266993 env[1440]: time="2024-02-09T18:34:29.266926719Z" level=info msg="cleaning up dead shim" Feb 9 18:34:29.274921 env[1440]: time="2024-02-09T18:34:29.274882247Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3566 runtime=io.containerd.runc.v2\n" Feb 9 18:34:29.814896 kubelet[2612]: E0209 18:34:29.814863 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:29.817816 kubelet[2612]: I0209 18:34:29.817786 2612 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=25088b05-5aca-48bf-86a2-86ac430ea4dd path="/var/lib/kubelet/pods/25088b05-5aca-48bf-86a2-86ac430ea4dd/volumes" Feb 9 18:34:29.960825 env[1440]: time="2024-02-09T18:34:29.960776370Z" level=info msg="StopPodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\"" Feb 9 18:34:29.961228 env[1440]: time="2024-02-09T18:34:29.960835650Z" level=info msg="Container to stop 
\"2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 18:34:29.964666 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc-shm.mount: Deactivated successfully. Feb 9 18:34:29.996906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc-rootfs.mount: Deactivated successfully. Feb 9 18:34:30.051582 env[1440]: time="2024-02-09T18:34:30.051520385Z" level=info msg="shim disconnected" id=583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc Feb 9 18:34:30.051582 env[1440]: time="2024-02-09T18:34:30.051577345Z" level=warning msg="cleaning up after shim disconnected" id=583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc namespace=k8s.io Feb 9 18:34:30.051777 env[1440]: time="2024-02-09T18:34:30.051588865Z" level=info msg="cleaning up dead shim" Feb 9 18:34:30.058448 env[1440]: time="2024-02-09T18:34:30.058406392Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3606 runtime=io.containerd.runc.v2\n" Feb 9 18:34:30.058731 env[1440]: time="2024-02-09T18:34:30.058704953Z" level=info msg="TearDown network for sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" successfully" Feb 9 18:34:30.058731 env[1440]: time="2024-02-09T18:34:30.058730673Z" level=info msg="StopPodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" returns successfully" Feb 9 18:34:30.140986 kubelet[2612]: I0209 18:34:30.140561 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50d4cb18-5fb6-457f-b802-158f671cfe09-tigera-ca-bundle\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.140986 kubelet[2612]: I0209 18:34:30.140601 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-lib-calico\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.140986 kubelet[2612]: I0209 18:34:30.140625 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/50d4cb18-5fb6-457f-b802-158f671cfe09-node-certs\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.140986 kubelet[2612]: I0209 18:34:30.140643 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-bin-dir\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.140986 kubelet[2612]: I0209 18:34:30.140659 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-xtables-lock\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.140986 kubelet[2612]: I0209 18:34:30.140677 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" 
(UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-flexvol-driver-host\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141266 kubelet[2612]: I0209 18:34:30.140695 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-run-calico\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141266 kubelet[2612]: I0209 18:34:30.140718 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwtvk\" (UniqueName: \"kubernetes.io/projected/50d4cb18-5fb6-457f-b802-158f671cfe09-kube-api-access-fwtvk\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141266 kubelet[2612]: I0209 18:34:30.140736 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-net-dir\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141266 kubelet[2612]: I0209 18:34:30.140755 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-policysync\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141266 kubelet[2612]: I0209 18:34:30.140776 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-log-dir\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141266 kubelet[2612]: I0209 18:34:30.140793 2612 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-lib-modules\") pod \"50d4cb18-5fb6-457f-b802-158f671cfe09\" (UID: \"50d4cb18-5fb6-457f-b802-158f671cfe09\") " Feb 9 18:34:30.141440 kubelet[2612]: I0209 18:34:30.140860 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.141440 kubelet[2612]: W0209 18:34:30.141079 2612 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/50d4cb18-5fb6-457f-b802-158f671cfe09/volumes/kubernetes.io~configmap/tigera-ca-bundle: clearQuota called, but quotas disabled Feb 9 18:34:30.141440 kubelet[2612]: I0209 18:34:30.141262 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50d4cb18-5fb6-457f-b802-158f671cfe09-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 18:34:30.141440 kubelet[2612]: I0209 18:34:30.141290 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142031 kubelet[2612]: I0209 18:34:30.141571 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142031 kubelet[2612]: I0209 18:34:30.141607 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142031 kubelet[2612]: I0209 18:34:30.141624 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142031 kubelet[2612]: I0209 18:34:30.141643 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142031 kubelet[2612]: I0209 18:34:30.141662 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142206 kubelet[2612]: I0209 18:34:30.141881 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-policysync" (OuterVolumeSpecName: "policysync") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.142206 kubelet[2612]: I0209 18:34:30.141910 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 18:34:30.144232 kubelet[2612]: I0209 18:34:30.144204 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50d4cb18-5fb6-457f-b802-158f671cfe09-kube-api-access-fwtvk" (OuterVolumeSpecName: "kube-api-access-fwtvk") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "kube-api-access-fwtvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 18:34:30.145520 systemd[1]: var-lib-kubelet-pods-50d4cb18\x2d5fb6\x2d457f\x2db802\x2d158f671cfe09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfwtvk.mount: Deactivated successfully. Feb 9 18:34:30.147628 systemd[1]: var-lib-kubelet-pods-50d4cb18\x2d5fb6\x2d457f\x2db802\x2d158f671cfe09-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Feb 9 18:34:30.148463 kubelet[2612]: I0209 18:34:30.148440 2612 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50d4cb18-5fb6-457f-b802-158f671cfe09-node-certs" (OuterVolumeSpecName: "node-certs") pod "50d4cb18-5fb6-457f-b802-158f671cfe09" (UID: "50d4cb18-5fb6-457f-b802-158f671cfe09"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 18:34:30.241385 kubelet[2612]: I0209 18:34:30.241355 2612 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-lib-modules\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241560 kubelet[2612]: I0209 18:34:30.241551 2612 reconciler_common.go:295] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50d4cb18-5fb6-457f-b802-158f671cfe09-tigera-ca-bundle\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241630 kubelet[2612]: I0209 18:34:30.241622 2612 reconciler_common.go:295] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-lib-calico\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241696 kubelet[2612]: I0209 18:34:30.241688 2612 reconciler_common.go:295] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/50d4cb18-5fb6-457f-b802-158f671cfe09-node-certs\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241763 kubelet[2612]: I0209 18:34:30.241755 2612 reconciler_common.go:295] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-bin-dir\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241837 kubelet[2612]: I0209 18:34:30.241829 2612 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-xtables-lock\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241905 kubelet[2612]: I0209 18:34:30.241897 2612 reconciler_common.go:295] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-flexvol-driver-host\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.241996 kubelet[2612]: I0209 18:34:30.241987 2612 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-fwtvk\" (UniqueName: \"kubernetes.io/projected/50d4cb18-5fb6-457f-b802-158f671cfe09-kube-api-access-fwtvk\") on 
node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.242078 kubelet[2612]: I0209 18:34:30.242068 2612 reconciler_common.go:295] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-var-run-calico\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.242147 kubelet[2612]: I0209 18:34:30.242139 2612 reconciler_common.go:295] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-policysync\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.242212 kubelet[2612]: I0209 18:34:30.242204 2612 reconciler_common.go:295] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-net-dir\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.242278 kubelet[2612]: I0209 18:34:30.242270 2612 reconciler_common.go:295] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/50d4cb18-5fb6-457f-b802-158f671cfe09-cni-log-dir\") on node \"ci-3510.3.2-a-e8e52debc2\" DevicePath \"\"" Feb 9 18:34:30.969821 kubelet[2612]: I0209 18:34:30.969796 2612 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:34:30.972760 kubelet[2612]: I0209 18:34:30.972743 2612 scope.go:115] "RemoveContainer" containerID="2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c" Feb 9 18:34:30.975910 env[1440]: time="2024-02-09T18:34:30.975868502Z" level=info msg="RemoveContainer for \"2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c\"" Feb 9 18:34:30.982555 kubelet[2612]: I0209 18:34:30.982423 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-596488b6d-xxt2v" podStartSLOduration=6.982374829 pod.CreationTimestamp="2024-02-09 18:34:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:30.000757613 +0000 UTC m=+28.452354549" watchObservedRunningTime="2024-02-09 18:34:30.982374829 +0000 UTC m=+29.433971765" Feb 9 18:34:30.995058 env[1440]: time="2024-02-09T18:34:30.994825522Z" level=info msg="RemoveContainer for \"2e145d94d8030cb233233a3c6058322e3efc0b327df89aa6c46d813ca706465c\" returns successfully" Feb 9 18:34:31.003795 kubelet[2612]: I0209 18:34:31.003765 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:31.003990 kubelet[2612]: E0209 18:34:31.003978 2612 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="50d4cb18-5fb6-457f-b802-158f671cfe09" containerName="flexvol-driver" Feb 9 18:34:31.004081 kubelet[2612]: I0209 18:34:31.004071 2612 memory_manager.go:346] "RemoveStaleState removing state" podUID="50d4cb18-5fb6-457f-b802-158f671cfe09" containerName="flexvol-driver" Feb 9 18:34:31.046752 kubelet[2612]: I0209 18:34:31.046338 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-cni-log-dir\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.046887 kubelet[2612]: I0209 18:34:31.046800 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-xtables-lock\") pod 
\"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.046887 kubelet[2612]: I0209 18:34:31.046845 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-policysync\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.046887 kubelet[2612]: I0209 18:34:31.046870 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-var-run-calico\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.046999 kubelet[2612]: I0209 18:34:31.046926 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-var-lib-calico\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047027 kubelet[2612]: I0209 18:34:31.047005 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-flexvol-driver-host\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047316 kubelet[2612]: I0209 18:34:31.047044 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-cni-bin-dir\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047350 kubelet[2612]: I0209 18:34:31.047334 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-cni-net-dir\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047413 kubelet[2612]: I0209 18:34:31.047366 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw8mr\" (UniqueName: \"kubernetes.io/projected/f9b961a9-a485-4599-952a-55c55b71897f-kube-api-access-zw8mr\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047442 kubelet[2612]: I0209 18:34:31.047438 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9b961a9-a485-4599-952a-55c55b71897f-tigera-ca-bundle\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047484 kubelet[2612]: I0209 18:34:31.047471 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f9b961a9-a485-4599-952a-55c55b71897f-node-certs\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " 
pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.047514 kubelet[2612]: I0209 18:34:31.047508 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9b961a9-a485-4599-952a-55c55b71897f-lib-modules\") pod \"calico-node-xzz92\" (UID: \"f9b961a9-a485-4599-952a-55c55b71897f\") " pod="calico-system/calico-node-xzz92" Feb 9 18:34:31.307863 env[1440]: time="2024-02-09T18:34:31.307431039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzz92,Uid:f9b961a9-a485-4599-952a-55c55b71897f,Namespace:calico-system,Attempt:0,}" Feb 9 18:34:31.373078 env[1440]: time="2024-02-09T18:34:31.372993346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:31.373219 env[1440]: time="2024-02-09T18:34:31.373081946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:31.373219 env[1440]: time="2024-02-09T18:34:31.373125146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:31.373445 env[1440]: time="2024-02-09T18:34:31.373405427Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c pid=3629 runtime=io.containerd.runc.v2 Feb 9 18:34:31.418908 env[1440]: time="2024-02-09T18:34:31.418858913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xzz92,Uid:f9b961a9-a485-4599-952a-55c55b71897f,Namespace:calico-system,Attempt:0,} returns sandbox id \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\"" Feb 9 18:34:31.423691 env[1440]: time="2024-02-09T18:34:31.423657198Z" level=info msg="CreateContainer within sandbox \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 9 18:34:31.506798 env[1440]: time="2024-02-09T18:34:31.506725322Z" level=info msg="CreateContainer within sandbox \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7bdcb968fde59cc0d44ee6ffad490e1b7fcfeb0857d1cb69c7350e8791baff55\"" Feb 9 18:34:31.507485 env[1440]: time="2024-02-09T18:34:31.507461843Z" level=info msg="StartContainer for \"7bdcb968fde59cc0d44ee6ffad490e1b7fcfeb0857d1cb69c7350e8791baff55\"" Feb 9 18:34:31.575884 env[1440]: time="2024-02-09T18:34:31.575731312Z" level=info msg="StartContainer for \"7bdcb968fde59cc0d44ee6ffad490e1b7fcfeb0857d1cb69c7350e8791baff55\" returns successfully" Feb 9 18:34:31.626467 env[1440]: time="2024-02-09T18:34:31.626423324Z" level=info msg="shim disconnected" id=7bdcb968fde59cc0d44ee6ffad490e1b7fcfeb0857d1cb69c7350e8791baff55 Feb 9 18:34:31.626770 env[1440]: time="2024-02-09T18:34:31.626741124Z" level=warning msg="cleaning up after shim disconnected" id=7bdcb968fde59cc0d44ee6ffad490e1b7fcfeb0857d1cb69c7350e8791baff55 namespace=k8s.io Feb 9 18:34:31.626863 env[1440]: time="2024-02-09T18:34:31.626849484Z" level=info msg="cleaning up dead shim" Feb 9 18:34:31.636106 env[1440]: time="2024-02-09T18:34:31.636076093Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n" Feb 9 
18:34:31.815749 kubelet[2612]: E0209 18:34:31.814559 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:31.816595 env[1440]: time="2024-02-09T18:34:31.816441437Z" level=info msg="StopPodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\"" Feb 9 18:34:31.816595 env[1440]: time="2024-02-09T18:34:31.816546717Z" level=info msg="TearDown network for sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" successfully" Feb 9 18:34:31.816595 env[1440]: time="2024-02-09T18:34:31.816579797Z" level=info msg="StopPodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" returns successfully" Feb 9 18:34:31.818059 kubelet[2612]: I0209 18:34:31.818031 2612 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=50d4cb18-5fb6-457f-b802-158f671cfe09 path="/var/lib/kubelet/pods/50d4cb18-5fb6-457f-b802-158f671cfe09/volumes" Feb 9 18:34:31.984980 env[1440]: time="2024-02-09T18:34:31.980415803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 9 18:34:33.348396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3117979727.mount: Deactivated successfully. Feb 9 18:34:33.730702 kubelet[2612]: I0209 18:34:33.729844 2612 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:34:33.781000 audit[3758]: NETFILTER_CFG table=filter:116 family=2 entries=13 op=nft_register_rule pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:33.781000 audit[3758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd12eb420 a2=0 a3=ffffb1ab86c0 items=0 ppid=2815 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:33.814395 kubelet[2612]: E0209 18:34:33.814371 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:33.831482 kernel: audit: type=1325 audit(1707503673.781:293): table=filter:116 family=2 entries=13 op=nft_register_rule pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:33.831643 kernel: audit: type=1300 audit(1707503673.781:293): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd12eb420 a2=0 a3=ffffb1ab86c0 items=0 ppid=2815 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:33.781000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:33.847137 kernel: audit: type=1327 audit(1707503673.781:293): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:33.781000 audit[3758]: NETFILTER_CFG table=nat:117 family=2 entries=27 op=nft_register_chain pid=3758 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:33.863300 kernel: audit: type=1325 audit(1707503673.781:294): table=nat:117 family=2 entries=27 op=nft_register_chain pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:33.863419 kernel: audit: type=1300 audit(1707503673.781:294): arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffd12eb420 a2=0 a3=ffffb1ab86c0 items=0 ppid=2815 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:33.781000 audit[3758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffd12eb420 a2=0 a3=ffffb1ab86c0 items=0 ppid=2815 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:33.781000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:33.911253 kernel: audit: type=1327 audit(1707503673.781:294): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:35.641448 env[1440]: time="2024-02-09T18:34:35.641401692Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:35.657197 env[1440]: time="2024-02-09T18:34:35.657158427Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:35.664203 env[1440]: time="2024-02-09T18:34:35.664169834Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:35.674080 env[1440]: time="2024-02-09T18:34:35.674041083Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:35.674627 env[1440]: time="2024-02-09T18:34:35.674598724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 9 18:34:35.677191 env[1440]: time="2024-02-09T18:34:35.677115046Z" level=info msg="CreateContainer within sandbox \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 18:34:35.719399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834855490.mount: Deactivated successfully. 
Feb 9 18:34:35.758316 env[1440]: time="2024-02-09T18:34:35.758267563Z" level=info msg="CreateContainer within sandbox \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd\"" Feb 9 18:34:35.759271 env[1440]: time="2024-02-09T18:34:35.759245564Z" level=info msg="StartContainer for \"22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd\"" Feb 9 18:34:35.816147 kubelet[2612]: E0209 18:34:35.816107 2612 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:35.837573 env[1440]: time="2024-02-09T18:34:35.837519078Z" level=info msg="StartContainer for \"22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd\" returns successfully" Feb 9 18:34:36.716678 systemd[1]: run-containerd-runc-k8s.io-22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd-runc.phQkb2.mount: Deactivated successfully. Feb 9 18:34:37.072866 env[1440]: time="2024-02-09T18:34:37.072736869Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 18:34:37.076920 kubelet[2612]: I0209 18:34:37.076893 2612 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 18:34:37.113079 kubelet[2612]: I0209 18:34:37.112504 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:37.113079 kubelet[2612]: I0209 18:34:37.112670 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:37.113079 kubelet[2612]: I0209 18:34:37.112907 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:37.118387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd-rootfs.mount: Deactivated successfully. 
Feb 9 18:34:37.183580 kubelet[2612]: I0209 18:34:37.183541 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c5b7cae-5788-46fc-a067-48ba7ee335bb-config-volume\") pod \"coredns-787d4945fb-8fllh\" (UID: \"9c5b7cae-5788-46fc-a067-48ba7ee335bb\") " pod="kube-system/coredns-787d4945fb-8fllh" Feb 9 18:34:37.183764 kubelet[2612]: I0209 18:34:37.183593 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8tz\" (UniqueName: \"kubernetes.io/projected/bf591268-bd51-4cdc-be95-b7338d5e9351-kube-api-access-8n8tz\") pod \"coredns-787d4945fb-hwl75\" (UID: \"bf591268-bd51-4cdc-be95-b7338d5e9351\") " pod="kube-system/coredns-787d4945fb-hwl75" Feb 9 18:34:37.183764 kubelet[2612]: I0209 18:34:37.183623 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41326e56-a6a2-4833-a012-a189fa6577b6-tigera-ca-bundle\") pod \"calico-kube-controllers-6496d9846c-w2h2q\" (UID: \"41326e56-a6a2-4833-a012-a189fa6577b6\") " pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" Feb 9 18:34:37.183764 kubelet[2612]: I0209 18:34:37.183645 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf591268-bd51-4cdc-be95-b7338d5e9351-config-volume\") pod \"coredns-787d4945fb-hwl75\" (UID: \"bf591268-bd51-4cdc-be95-b7338d5e9351\") " pod="kube-system/coredns-787d4945fb-hwl75" Feb 9 18:34:37.183764 kubelet[2612]: I0209 18:34:37.183668 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcc77\" (UniqueName: \"kubernetes.io/projected/41326e56-a6a2-4833-a012-a189fa6577b6-kube-api-access-jcc77\") pod \"calico-kube-controllers-6496d9846c-w2h2q\" (UID: \"41326e56-a6a2-4833-a012-a189fa6577b6\") " pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" Feb 9 18:34:37.183764 kubelet[2612]: I0209 18:34:37.183689 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmpwk\" (UniqueName: \"kubernetes.io/projected/9c5b7cae-5788-46fc-a067-48ba7ee335bb-kube-api-access-kmpwk\") pod \"coredns-787d4945fb-8fllh\" (UID: \"9c5b7cae-5788-46fc-a067-48ba7ee335bb\") " pod="kube-system/coredns-787d4945fb-8fllh" Feb 9 18:34:37.434064 env[1440]: time="2024-02-09T18:34:37.433928879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d9846c-w2h2q,Uid:41326e56-a6a2-4833-a012-a189fa6577b6,Namespace:calico-system,Attempt:0,}" Feb 9 18:34:37.441755 env[1440]: time="2024-02-09T18:34:37.441534486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8fllh,Uid:9c5b7cae-5788-46fc-a067-48ba7ee335bb,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:37.444656 env[1440]: time="2024-02-09T18:34:37.444486649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hwl75,Uid:bf591268-bd51-4cdc-be95-b7338d5e9351,Namespace:kube-system,Attempt:0,}" Feb 9 18:34:38.215684 env[1440]: time="2024-02-09T18:34:38.215205031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hwhv,Uid:844edbd0-8ef3-4fe8-912a-b7cf2c34e24c,Namespace:calico-system,Attempt:0,}" Feb 9 18:34:38.278320 env[1440]: time="2024-02-09T18:34:38.278276248Z" level=info msg="shim disconnected" 
id=22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd Feb 9 18:34:38.278495 env[1440]: time="2024-02-09T18:34:38.278479088Z" level=warning msg="cleaning up after shim disconnected" id=22f355d5a208499dbf552d34b481ce545b6aece649f6bae262b98c2b0cbaebdd namespace=k8s.io Feb 9 18:34:38.278552 env[1440]: time="2024-02-09T18:34:38.278540528Z" level=info msg="cleaning up dead shim" Feb 9 18:34:38.286588 env[1440]: time="2024-02-09T18:34:38.286545135Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:34:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3820 runtime=io.containerd.runc.v2\n" Feb 9 18:34:38.521181 env[1440]: time="2024-02-09T18:34:38.520731026Z" level=error msg="Failed to destroy network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.521181 env[1440]: time="2024-02-09T18:34:38.521093426Z" level=error msg="encountered an error cleaning up failed sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.521181 env[1440]: time="2024-02-09T18:34:38.521136266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d9846c-w2h2q,Uid:41326e56-a6a2-4833-a012-a189fa6577b6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.521849 kubelet[2612]: E0209 18:34:38.521504 2612 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.521849 kubelet[2612]: E0209 18:34:38.521563 2612 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" Feb 9 18:34:38.521849 kubelet[2612]: E0209 18:34:38.521585 2612 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" Feb 9 18:34:38.522236 kubelet[2612]: E0209 18:34:38.521634 2612 pod_workers.go:965] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6496d9846c-w2h2q_calico-system(41326e56-a6a2-4833-a012-a189fa6577b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6496d9846c-w2h2q_calico-system(41326e56-a6a2-4833-a012-a189fa6577b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" podUID=41326e56-a6a2-4833-a012-a189fa6577b6 Feb 9 18:34:38.572811 env[1440]: time="2024-02-09T18:34:38.572748313Z" level=error msg="Failed to destroy network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.573144 env[1440]: time="2024-02-09T18:34:38.573109353Z" level=error msg="encountered an error cleaning up failed sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.573187 env[1440]: time="2024-02-09T18:34:38.573164433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8fllh,Uid:9c5b7cae-5788-46fc-a067-48ba7ee335bb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.573875 kubelet[2612]: E0209 18:34:38.573406 2612 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.573875 kubelet[2612]: E0209 18:34:38.573454 2612 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-8fllh" Feb 9 18:34:38.573875 kubelet[2612]: E0209 18:34:38.573473 2612 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-8fllh" Feb 9 
18:34:38.574068 kubelet[2612]: E0209 18:34:38.573530 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-8fllh_kube-system(9c5b7cae-5788-46fc-a067-48ba7ee335bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-8fllh_kube-system(9c5b7cae-5788-46fc-a067-48ba7ee335bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-8fllh" podUID=9c5b7cae-5788-46fc-a067-48ba7ee335bb Feb 9 18:34:38.601220 env[1440]: time="2024-02-09T18:34:38.601150498Z" level=error msg="Failed to destroy network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.601540 env[1440]: time="2024-02-09T18:34:38.601503299Z" level=error msg="encountered an error cleaning up failed sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.601576 env[1440]: time="2024-02-09T18:34:38.601555619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hwhv,Uid:844edbd0-8ef3-4fe8-912a-b7cf2c34e24c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.603008 kubelet[2612]: E0209 18:34:38.601775 2612 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.603008 kubelet[2612]: E0209 18:34:38.601881 2612 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:38.603008 kubelet[2612]: E0209 18:34:38.601904 2612 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-7hwhv" Feb 9 18:34:38.603164 kubelet[2612]: E0209 18:34:38.601947 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7hwhv_calico-system(844edbd0-8ef3-4fe8-912a-b7cf2c34e24c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7hwhv_calico-system(844edbd0-8ef3-4fe8-912a-b7cf2c34e24c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:38.608894 env[1440]: time="2024-02-09T18:34:38.608846825Z" level=error msg="Failed to destroy network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.609319 env[1440]: time="2024-02-09T18:34:38.609288466Z" level=error msg="encountered an error cleaning up failed sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.609448 env[1440]: time="2024-02-09T18:34:38.609422466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hwl75,Uid:bf591268-bd51-4cdc-be95-b7338d5e9351,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.610970 kubelet[2612]: E0209 18:34:38.609691 2612 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:38.610970 kubelet[2612]: E0209 18:34:38.609750 2612 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-hwl75" Feb 9 18:34:38.610970 kubelet[2612]: E0209 18:34:38.609775 2612 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-hwl75" Feb 9 18:34:38.611173 kubelet[2612]: E0209 18:34:38.609835 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-hwl75_kube-system(bf591268-bd51-4cdc-be95-b7338d5e9351)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-hwl75_kube-system(bf591268-bd51-4cdc-be95-b7338d5e9351)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-hwl75" podUID=bf591268-bd51-4cdc-be95-b7338d5e9351 Feb 9 18:34:38.990720 kubelet[2612]: I0209 18:34:38.989363 2612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:34:38.990871 env[1440]: time="2024-02-09T18:34:38.990097088Z" level=info msg="StopPodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\"" Feb 9 18:34:38.994296 kubelet[2612]: I0209 18:34:38.993116 2612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:34:38.995868 env[1440]: time="2024-02-09T18:34:38.995833294Z" level=info msg="StopPodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\"" Feb 9 18:34:38.998359 kubelet[2612]: I0209 18:34:38.997066 2612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:34:38.999353 env[1440]: time="2024-02-09T18:34:38.999323417Z" level=info msg="StopPodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\"" Feb 9 18:34:39.004978 env[1440]: time="2024-02-09T18:34:39.004183421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\"" Feb 9 18:34:39.005084 kubelet[2612]: I0209 18:34:39.004918 2612 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:34:39.005779 env[1440]: time="2024-02-09T18:34:39.005409062Z" level=info msg="StopPodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\"" Feb 9 18:34:39.069479 env[1440]: time="2024-02-09T18:34:39.069411199Z" level=error msg="StopPodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" failed" error="failed to destroy network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:39.069798 kubelet[2612]: E0209 18:34:39.069766 2612 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:34:39.069866 kubelet[2612]: E0209 18:34:39.069827 2612 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499} Feb 9 18:34:39.069866 kubelet[2612]: E0209 18:34:39.069864 2612 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf591268-bd51-4cdc-be95-b7338d5e9351\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:34:39.069959 kubelet[2612]: E0209 18:34:39.069913 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf591268-bd51-4cdc-be95-b7338d5e9351\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-hwl75" podUID=bf591268-bd51-4cdc-be95-b7338d5e9351 Feb 9 18:34:39.070440 env[1440]: time="2024-02-09T18:34:39.070397400Z" level=error msg="StopPodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" failed" error="failed to destroy network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:39.070702 kubelet[2612]: E0209 18:34:39.070598 2612 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:34:39.070702 kubelet[2612]: E0209 18:34:39.070625 2612 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3} Feb 9 18:34:39.070702 kubelet[2612]: E0209 18:34:39.070655 2612 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"41326e56-a6a2-4833-a012-a189fa6577b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:34:39.070702 kubelet[2612]: E0209 18:34:39.070678 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"41326e56-a6a2-4833-a012-a189fa6577b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" podUID=41326e56-a6a2-4833-a012-a189fa6577b6 Feb 9 18:34:39.084483 env[1440]: time="2024-02-09T18:34:39.084431212Z" level=error msg="StopPodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" failed" error="failed to destroy network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:39.085177 kubelet[2612]: E0209 18:34:39.085141 2612 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:34:39.085251 kubelet[2612]: E0209 18:34:39.085188 2612 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb} Feb 9 18:34:39.085251 kubelet[2612]: E0209 18:34:39.085221 2612 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:34:39.085251 kubelet[2612]: E0209 18:34:39.085247 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7hwhv" podUID=844edbd0-8ef3-4fe8-912a-b7cf2c34e24c Feb 9 18:34:39.086804 env[1440]: time="2024-02-09T18:34:39.086770014Z" level=error msg="StopPodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" failed" error="failed to destroy network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 9 18:34:39.087060 kubelet[2612]: E0209 18:34:39.087045 2612 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:34:39.087159 kubelet[2612]: E0209 18:34:39.087148 2612 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac} Feb 9 18:34:39.087237 kubelet[2612]: E0209 18:34:39.087228 2612 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c5b7cae-5788-46fc-a067-48ba7ee335bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 9 18:34:39.087336 kubelet[2612]: E0209 18:34:39.087326 2612 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c5b7cae-5788-46fc-a067-48ba7ee335bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-8fllh" podUID=9c5b7cae-5788-46fc-a067-48ba7ee335bb Feb 9 18:34:39.415697 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499-shm.mount: Deactivated successfully. Feb 9 18:34:39.415842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac-shm.mount: Deactivated successfully. Feb 9 18:34:39.415924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3-shm.mount: Deactivated successfully. Feb 9 18:34:44.344095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198410258.mount: Deactivated successfully. 
Feb 9 18:34:44.585756 env[1440]: time="2024-02-09T18:34:44.585714386Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.601779 env[1440]: time="2024-02-09T18:34:44.601560639Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.610734 env[1440]: time="2024-02-09T18:34:44.610696087Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.625342 env[1440]: time="2024-02-09T18:34:44.625305579Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:44.626100 env[1440]: time="2024-02-09T18:34:44.626073179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\"" Feb 9 18:34:44.640805 env[1440]: time="2024-02-09T18:34:44.640770191Z" level=info msg="CreateContainer within sandbox \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 9 18:34:44.681272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779986256.mount: Deactivated successfully. Feb 9 18:34:44.714148 env[1440]: time="2024-02-09T18:34:44.714102971Z" level=info msg="CreateContainer within sandbox \"382540a7a61b0c1664b0df450e6a8b3174f52a5f3565f119a4bfe73fb97ee10c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc\"" Feb 9 18:34:44.716283 env[1440]: time="2024-02-09T18:34:44.714945652Z" level=info msg="StartContainer for \"b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc\"" Feb 9 18:34:44.771930 env[1440]: time="2024-02-09T18:34:44.771869699Z" level=info msg="StartContainer for \"b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc\" returns successfully" Feb 9 18:34:44.933726 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 9 18:34:44.933859 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
Feb 9 18:34:45.037226 kubelet[2612]: I0209 18:34:45.037174 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-xzz92" podStartSLOduration=-9.223372021817636e+09 pod.CreationTimestamp="2024-02-09 18:34:30 +0000 UTC" firstStartedPulling="2024-02-09 18:34:31.978109361 +0000 UTC m=+30.429706297" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:45.036368275 +0000 UTC m=+43.487965211" watchObservedRunningTime="2024-02-09 18:34:45.037138635 +0000 UTC m=+43.488735571" Feb 9 18:34:46.320000 audit[4169]: AVC avc: denied { write } for pid=4169 comm="tee" name="fd" dev="proc" ino=25999 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.320000 audit[4166]: AVC avc: denied { write } for pid=4166 comm="tee" name="fd" dev="proc" ino=26641 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.371875 kernel: audit: type=1400 audit(1707503686.320:295): avc: denied { write } for pid=4169 comm="tee" name="fd" dev="proc" ino=25999 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.372004 kernel: audit: type=1400 audit(1707503686.320:296): avc: denied { write } for pid=4166 comm="tee" name="fd" dev="proc" ino=26641 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.320000 audit[4166]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff2332975 a2=241 a3=1b6 items=1 ppid=4122 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.407796 kernel: audit: type=1300 audit(1707503686.320:296): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff2332975 a2=241 a3=1b6 items=1 ppid=4122 pid=4166 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.320000 audit: CWD cwd="/etc/service/enabled/bird6/log" Feb 9 18:34:46.320000 audit: PATH item=0 name="/dev/fd/63" inode=25592 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.459969 kernel: audit: type=1307 audit(1707503686.320:296): cwd="/etc/service/enabled/bird6/log" Feb 9 18:34:46.460087 kernel: audit: type=1302 audit(1707503686.320:296): item=0 name="/dev/fd/63" inode=25592 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.320000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.485890 kernel: audit: type=1327 audit(1707503686.320:296): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.336000 audit[4176]: AVC avc: denied { write } for pid=4176 comm="tee" name="fd" dev="proc" ino=26648 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.509363 kernel: audit: 
type=1400 audit(1707503686.336:298): avc: denied { write } for pid=4176 comm="tee" name="fd" dev="proc" ino=26648 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.336000 audit[4176]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe9b47975 a2=241 a3=1b6 items=1 ppid=4120 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.547400 kernel: audit: type=1300 audit(1707503686.336:298): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe9b47975 a2=241 a3=1b6 items=1 ppid=4120 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.336000 audit: CWD cwd="/etc/service/enabled/felix/log" Feb 9 18:34:46.566450 kernel: audit: type=1307 audit(1707503686.336:298): cwd="/etc/service/enabled/felix/log" Feb 9 18:34:46.336000 audit: PATH item=0 name="/dev/fd/63" inode=26637 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.587715 kernel: audit: type=1302 audit(1707503686.336:298): item=0 name="/dev/fd/63" inode=26637 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.336000 audit[4171]: AVC avc: denied { write } for pid=4171 comm="tee" name="fd" dev="proc" ino=26647 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.336000 audit[4171]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe4126965 a2=241 a3=1b6 items=1 ppid=4117 pid=4171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.336000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Feb 9 18:34:46.336000 audit: PATH item=0 name="/dev/fd/63" inode=26632 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.336000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.320000 audit[4169]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff899e976 a2=241 a3=1b6 items=1 ppid=4126 pid=4169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.320000 audit: CWD cwd="/etc/service/enabled/bird/log" Feb 9 18:34:46.320000 audit: PATH item=0 name="/dev/fd/63" inode=25990 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.320000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.349000 audit[4173]: AVC avc: denied { write } for pid=4173 comm="tee" name="fd" dev="proc" ino=26653 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.349000 audit[4173]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff8e22966 a2=241 a3=1b6 items=1 ppid=4115 pid=4173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.349000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Feb 9 18:34:46.349000 audit: PATH item=0 name="/dev/fd/63" inode=25993 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.349000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.358000 audit[4178]: AVC avc: denied { write } for pid=4178 comm="tee" name="fd" dev="proc" ino=26004 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.358000 audit[4178]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdb73d975 a2=241 a3=1b6 items=1 ppid=4128 pid=4178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.358000 audit: CWD cwd="/etc/service/enabled/confd/log" Feb 9 18:34:46.358000 audit: PATH item=0 name="/dev/fd/63" inode=26640 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.358000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:46.360000 audit[4186]: AVC avc: denied { write } for pid=4186 comm="tee" name="fd" dev="proc" ino=26008 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Feb 9 18:34:46.360000 audit[4186]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffda1ca977 a2=241 a3=1b6 items=1 ppid=4119 pid=4186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:46.360000 audit: CWD cwd="/etc/service/enabled/cni/log" Feb 9 18:34:46.360000 audit: PATH item=0 name="/dev/fd/63" inode=26000 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 18:34:46.360000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: 
denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit: BPF prog-id=10 op=LOAD Feb 9 18:34:47.176000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeee890f8 a2=70 a3=0 items=0 ppid=4133 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.176000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:34:47.176000 audit: BPF prog-id=10 op=UNLOAD Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit: BPF prog-id=11 op=LOAD Feb 9 18:34:47.176000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeee890f8 a2=70 a3=4a174c items=0 ppid=4133 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.176000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:34:47.176000 audit: BPF prog-id=11 op=UNLOAD Feb 9 18:34:47.176000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.176000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffeee89128 a2=70 a3=c3c579f items=0 ppid=4133 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.176000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 
audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { perfmon } for pid=4256 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit[4256]: AVC avc: denied { bpf } for pid=4256 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.177000 audit: BPF prog-id=12 op=LOAD Feb 9 18:34:47.177000 audit[4256]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffeee89078 a2=70 a3=c3c57b9 items=0 ppid=4133 pid=4256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.177000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 9 18:34:47.179000 audit[4258]: AVC avc: denied { bpf } for pid=4258 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.179000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd31c4d98 a2=70 a3=0 items=0 ppid=4133 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.179000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 18:34:47.179000 audit[4258]: AVC avc: denied { bpf } for pid=4258 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 9 18:34:47.179000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffd31c4c78 a2=70 a3=2 items=0 ppid=4133 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.179000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 9 18:34:47.184000 audit: BPF prog-id=12 op=UNLOAD Feb 9 18:34:47.194243 kubelet[2612]: I0209 18:34:47.194202 2612 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 9 18:34:47.231679 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.4RJVYW.mount: Deactivated successfully. 
Feb 9 18:34:47.313000 audit[4317]: NETFILTER_CFG table=mangle:118 family=2 entries=19 op=nft_register_chain pid=4317 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:47.313000 audit[4317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=fffff4007980 a2=0 a3=ffffb57e6fa8 items=0 ppid=4133 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.313000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:47.321000 audit[4316]: NETFILTER_CFG table=nat:119 family=2 entries=16 op=nft_register_chain pid=4316 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:47.321000 audit[4316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffccce5e60 a2=0 a3=ffffbcdb1fa8 items=0 ppid=4133 pid=4316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.321000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:47.326000 audit[4327]: NETFILTER_CFG table=filter:120 family=2 entries=39 op=nft_register_chain pid=4327 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:47.326000 audit[4327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18472 a0=3 a1=fffff4b1c3d0 a2=0 a3=ffffa2c3efa8 items=0 ppid=4133 pid=4327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.326000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:47.346000 audit[4315]: NETFILTER_CFG table=raw:121 family=2 entries=19 op=nft_register_chain pid=4315 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:47.346000 audit[4315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=ffffc2ae6800 a2=0 a3=ffff97108fa8 items=0 ppid=4133 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:47.346000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:47.852695 systemd-networkd[1604]: vxlan.calico: Link UP Feb 9 18:34:47.852709 systemd-networkd[1604]: vxlan.calico: Gained carrier Feb 9 18:34:48.226254 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.pR5nLY.mount: Deactivated successfully. 
Feb 9 18:34:48.979141 systemd-networkd[1604]: vxlan.calico: Gained IPv6LL Feb 9 18:34:51.815182 env[1440]: time="2024-02-09T18:34:51.815121042Z" level=info msg="StopPodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\"" Feb 9 18:34:51.815543 env[1440]: time="2024-02-09T18:34:51.815276962Z" level=info msg="StopPodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\"" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.886 [INFO][4367] k8s.go 578: Cleaning up netns ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.886 [INFO][4367] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" iface="eth0" netns="/var/run/netns/cni-7e73c2b9-a88e-ca5a-c3f1-4793b8b9ace1" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.886 [INFO][4367] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" iface="eth0" netns="/var/run/netns/cni-7e73c2b9-a88e-ca5a-c3f1-4793b8b9ace1" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.886 [INFO][4367] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" iface="eth0" netns="/var/run/netns/cni-7e73c2b9-a88e-ca5a-c3f1-4793b8b9ace1" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.886 [INFO][4367] k8s.go 585: Releasing IP address(es) ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.886 [INFO][4367] utils.go 188: Calico CNI releasing IP address ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.909 [INFO][4386] ipam_plugin.go 415: Releasing address using handleID ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.910 [INFO][4386] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.910 [INFO][4386] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.919 [WARNING][4386] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.919 [INFO][4386] ipam_plugin.go 443: Releasing address using workloadID ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.921 [INFO][4386] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:34:51.928660 env[1440]: 2024-02-09 18:34:51.927 [INFO][4367] k8s.go 591: Teardown processing complete. 
ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:34:51.934204 env[1440]: time="2024-02-09T18:34:51.934156690Z" level=info msg="TearDown network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" successfully" Feb 9 18:34:51.934355 env[1440]: time="2024-02-09T18:34:51.934333810Z" level=info msg="StopPodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" returns successfully" Feb 9 18:34:51.935028 env[1440]: time="2024-02-09T18:34:51.935002811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d9846c-w2h2q,Uid:41326e56-a6a2-4833-a012-a189fa6577b6,Namespace:calico-system,Attempt:1,}" Feb 9 18:34:51.935047 systemd[1]: run-netns-cni\x2d7e73c2b9\x2da88e\x2dca5a\x2dc3f1\x2d4793b8b9ace1.mount: Deactivated successfully. Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.879 [INFO][4368] k8s.go 578: Cleaning up netns ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.879 [INFO][4368] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" iface="eth0" netns="/var/run/netns/cni-c91ae356-b0db-0549-bc67-0b3097ea93d4" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.879 [INFO][4368] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" iface="eth0" netns="/var/run/netns/cni-c91ae356-b0db-0549-bc67-0b3097ea93d4" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.880 [INFO][4368] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" iface="eth0" netns="/var/run/netns/cni-c91ae356-b0db-0549-bc67-0b3097ea93d4" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.880 [INFO][4368] k8s.go 585: Releasing IP address(es) ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.880 [INFO][4368] utils.go 188: Calico CNI releasing IP address ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.911 [INFO][4381] ipam_plugin.go 415: Releasing address using handleID ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.911 [INFO][4381] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.920 [INFO][4381] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.930 [WARNING][4381] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.930 [INFO][4381] ipam_plugin.go 443: Releasing address using workloadID ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.936 [INFO][4381] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:34:51.938891 env[1440]: 2024-02-09 18:34:51.937 [INFO][4368] k8s.go 591: Teardown processing complete. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:34:51.941299 systemd[1]: run-netns-cni\x2dc91ae356\x2db0db\x2d0549\x2dbc67\x2d0b3097ea93d4.mount: Deactivated successfully. Feb 9 18:34:51.941644 env[1440]: time="2024-02-09T18:34:51.941617496Z" level=info msg="TearDown network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" successfully" Feb 9 18:34:51.941719 env[1440]: time="2024-02-09T18:34:51.941704256Z" level=info msg="StopPodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" returns successfully" Feb 9 18:34:51.942405 env[1440]: time="2024-02-09T18:34:51.942373896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hwhv,Uid:844edbd0-8ef3-4fe8-912a-b7cf2c34e24c,Namespace:calico-system,Attempt:1,}" Feb 9 18:34:52.190582 systemd-networkd[1604]: cali571278ad65b: Link UP Feb 9 18:34:52.205655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:34:52.205776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali571278ad65b: link becomes ready Feb 9 18:34:52.206426 systemd-networkd[1604]: cali571278ad65b: Gained carrier Feb 9 18:34:52.260337 systemd-networkd[1604]: cali6eb5fd23ccb: Link UP Feb 9 18:34:52.272219 systemd-networkd[1604]: cali6eb5fd23ccb: Gained carrier Feb 9 18:34:52.273988 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali6eb5fd23ccb: link becomes ready Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.088 [INFO][4393] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0 csi-node-driver- calico-system 844edbd0-8ef3-4fe8-912a-b7cf2c34e24c 765 0 2024-02-09 18:34:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3510.3.2-a-e8e52debc2 csi-node-driver-7hwhv eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali571278ad65b [] []}} ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.088 [INFO][4393] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" 
WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.131 [INFO][4415] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" HandleID="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.144 [INFO][4415] ipam_plugin.go 268: Auto assigning IP ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" HandleID="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000203730), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-e8e52debc2", "pod":"csi-node-driver-7hwhv", "timestamp":"2024-02-09 18:34:52.131859036 +0000 UTC"}, Hostname:"ci-3510.3.2-a-e8e52debc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.144 [INFO][4415] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.144 [INFO][4415] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.144 [INFO][4415] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-e8e52debc2' Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.145 [INFO][4415] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.154 [INFO][4415] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.160 [INFO][4415] ipam.go 489: Trying affinity for 192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.161 [INFO][4415] ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.163 [INFO][4415] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.163 [INFO][4415] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.164 [INFO][4415] ipam.go 1682: Creating new handle: k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.172 [INFO][4415] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.179 [INFO][4415] ipam.go 1216: Successfully claimed IPs: [192.168.32.1/26] block=192.168.32.0/26 
handle="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.179 [INFO][4415] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.1/26] handle="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.179 [INFO][4415] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:34:52.291943 env[1440]: 2024-02-09 18:34:52.179 [INFO][4415] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.32.1/26] IPv6=[] ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" HandleID="k8s-pod-network.c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.292566 env[1440]: 2024-02-09 18:34:52.185 [INFO][4393] k8s.go 385: Populated endpoint ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"", Pod:"csi-node-driver-7hwhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali571278ad65b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:52.292566 env[1440]: 2024-02-09 18:34:52.185 [INFO][4393] k8s.go 386: Calico CNI using IPs: [192.168.32.1/32] ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.292566 env[1440]: 2024-02-09 18:34:52.185 [INFO][4393] dataplane_linux.go 68: Setting the host side veth name to cali571278ad65b ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.292566 env[1440]: 2024-02-09 18:34:52.241 [INFO][4393] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.292566 env[1440]: 2024-02-09 18:34:52.241 [INFO][4393] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d", Pod:"csi-node-driver-7hwhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali571278ad65b", MAC:"26:5f:38:29:00:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:52.292566 env[1440]: 2024-02-09 18:34:52.290 [INFO][4393] k8s.go 491: Wrote updated endpoint to datastore ContainerID="c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d" Namespace="calico-system" Pod="csi-node-driver-7hwhv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.114 [INFO][4403] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0 calico-kube-controllers-6496d9846c- calico-system 41326e56-a6a2-4833-a012-a189fa6577b6 766 0 2024-02-09 18:34:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6496d9846c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.2-a-e8e52debc2 calico-kube-controllers-6496d9846c-w2h2q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6eb5fd23ccb [] []}} ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-" Feb 9 18:34:52.298940 env[1440]: 
2024-02-09 18:34:52.114 [INFO][4403] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.153 [INFO][4421] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" HandleID="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.174 [INFO][4421] ipam_plugin.go 268: Auto assigning IP ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" HandleID="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012dfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.2-a-e8e52debc2", "pod":"calico-kube-controllers-6496d9846c-w2h2q", "timestamp":"2024-02-09 18:34:52.153669612 +0000 UTC"}, Hostname:"ci-3510.3.2-a-e8e52debc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.174 [INFO][4421] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.179 [INFO][4421] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.179 [INFO][4421] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-e8e52debc2' Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.181 [INFO][4421] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.184 [INFO][4421] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.198 [INFO][4421] ipam.go 489: Trying affinity for 192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.205 [INFO][4421] ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.217 [INFO][4421] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.217 [INFO][4421] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.220 [INFO][4421] ipam.go 1682: Creating new handle: k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0 Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.224 [INFO][4421] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.228 [INFO][4421] ipam.go 1216: Successfully claimed IPs: [192.168.32.2/26] block=192.168.32.0/26 handle="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.228 [INFO][4421] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.2/26] handle="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.228 [INFO][4421] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:34:52.298940 env[1440]: 2024-02-09 18:34:52.228 [INFO][4421] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.32.2/26] IPv6=[] ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" HandleID="k8s-pod-network.47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.299507 env[1440]: 2024-02-09 18:34:52.242 [INFO][4403] k8s.go 385: Populated endpoint ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0", GenerateName:"calico-kube-controllers-6496d9846c-", Namespace:"calico-system", SelfLink:"", UID:"41326e56-a6a2-4833-a012-a189fa6577b6", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d9846c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"", Pod:"calico-kube-controllers-6496d9846c-w2h2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6eb5fd23ccb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:52.299507 env[1440]: 2024-02-09 18:34:52.242 [INFO][4403] k8s.go 386: Calico CNI using IPs: [192.168.32.2/32] ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.299507 env[1440]: 2024-02-09 18:34:52.242 [INFO][4403] dataplane_linux.go 68: Setting the host side veth name to cali6eb5fd23ccb ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.299507 env[1440]: 2024-02-09 18:34:52.276 [INFO][4403] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.299507 env[1440]: 2024-02-09 18:34:52.276 [INFO][4403] k8s.go 413: Added Mac, interface name, and active 
container ID to endpoint ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0", GenerateName:"calico-kube-controllers-6496d9846c-", Namespace:"calico-system", SelfLink:"", UID:"41326e56-a6a2-4833-a012-a189fa6577b6", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d9846c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0", Pod:"calico-kube-controllers-6496d9846c-w2h2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6eb5fd23ccb", MAC:"2e:d5:2b:3b:82:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:52.299507 env[1440]: 2024-02-09 18:34:52.297 [INFO][4403] k8s.go 491: Wrote updated endpoint to datastore ContainerID="47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0" Namespace="calico-system" Pod="calico-kube-controllers-6496d9846c-w2h2q" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:34:52.341749 env[1440]: time="2024-02-09T18:34:52.341695429Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:52.342163 env[1440]: time="2024-02-09T18:34:52.342133870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:52.342287 env[1440]: time="2024-02-09T18:34:52.342264110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:52.352116 env[1440]: time="2024-02-09T18:34:52.349969875Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d pid=4460 runtime=io.containerd.runc.v2 Feb 9 18:34:52.353423 env[1440]: time="2024-02-09T18:34:52.353348038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:52.353423 env[1440]: time="2024-02-09T18:34:52.353388238Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:52.353423 env[1440]: time="2024-02-09T18:34:52.353399398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:52.354488 env[1440]: time="2024-02-09T18:34:52.354426799Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0 pid=4473 runtime=io.containerd.runc.v2 Feb 9 18:34:52.457224 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 9 18:34:52.457447 kernel: audit: type=1325 audit(1707503692.436:315): table=filter:122 family=2 entries=62 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:52.436000 audit[4525]: NETFILTER_CFG table=filter:122 family=2 entries=62 op=nft_register_chain pid=4525 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:52.436000 audit[4525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=35296 a0=3 a1=fffff2747490 a2=0 a3=ffffb03ccfa8 items=0 ppid=4133 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:52.494630 kernel: audit: type=1300 audit(1707503692.436:315): arch=c00000b7 syscall=211 success=yes exit=35296 a0=3 a1=fffff2747490 a2=0 a3=ffffb03ccfa8 items=0 ppid=4133 pid=4525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:52.436000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:52.506266 env[1440]: time="2024-02-09T18:34:52.506228510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7hwhv,Uid:844edbd0-8ef3-4fe8-912a-b7cf2c34e24c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d\"" Feb 9 18:34:52.507905 env[1440]: time="2024-02-09T18:34:52.507879111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 9 18:34:52.516280 kernel: audit: type=1327 audit(1707503692.436:315): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:52.561490 env[1440]: time="2024-02-09T18:34:52.561414510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6496d9846c-w2h2q,Uid:41326e56-a6a2-4833-a012-a189fa6577b6,Namespace:calico-system,Attempt:1,} returns sandbox id \"47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0\"" Feb 9 18:34:52.815152 env[1440]: time="2024-02-09T18:34:52.815036616Z" level=info msg="StopPodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\"" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.860 [INFO][4557] k8s.go 578: Cleaning up netns ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.860 [INFO][4557] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" iface="eth0" netns="/var/run/netns/cni-2df812b3-6c8b-d7fd-9062-ded9f23fdb34" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.860 [INFO][4557] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" iface="eth0" netns="/var/run/netns/cni-2df812b3-6c8b-d7fd-9062-ded9f23fdb34" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.860 [INFO][4557] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" iface="eth0" netns="/var/run/netns/cni-2df812b3-6c8b-d7fd-9062-ded9f23fdb34" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.860 [INFO][4557] k8s.go 585: Releasing IP address(es) ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.861 [INFO][4557] utils.go 188: Calico CNI releasing IP address ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.879 [INFO][4563] ipam_plugin.go 415: Releasing address using handleID ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.879 [INFO][4563] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.879 [INFO][4563] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.889 [WARNING][4563] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.889 [INFO][4563] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.890 [INFO][4563] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:34:52.893466 env[1440]: 2024-02-09 18:34:52.891 [INFO][4557] k8s.go 591: Teardown processing complete. 
ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:34:52.894373 env[1440]: time="2024-02-09T18:34:52.894328194Z" level=info msg="TearDown network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" successfully" Feb 9 18:34:52.894493 env[1440]: time="2024-02-09T18:34:52.894474554Z" level=info msg="StopPodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" returns successfully" Feb 9 18:34:52.895199 env[1440]: time="2024-02-09T18:34:52.895173394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hwl75,Uid:bf591268-bd51-4cdc-be95-b7338d5e9351,Namespace:kube-system,Attempt:1,}" Feb 9 18:34:52.934258 systemd[1]: run-netns-cni\x2d2df812b3\x2d6c8b\x2dd7fd\x2d9062\x2dded9f23fdb34.mount: Deactivated successfully. Feb 9 18:34:53.085455 systemd-networkd[1604]: calib16ff9cdb94: Link UP Feb 9 18:34:53.095776 systemd-networkd[1604]: calib16ff9cdb94: Gained carrier Feb 9 18:34:53.096019 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib16ff9cdb94: link becomes ready Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.017 [INFO][4570] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0 coredns-787d4945fb- kube-system bf591268-bd51-4cdc-be95-b7338d5e9351 780 0 2024-02-09 18:34:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-e8e52debc2 coredns-787d4945fb-hwl75 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib16ff9cdb94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.017 [INFO][4570] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.044 [INFO][4584] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" HandleID="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.056 [INFO][4584] ipam_plugin.go 268: Auto assigning IP ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" HandleID="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400021f9a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-e8e52debc2", "pod":"coredns-787d4945fb-hwl75", "timestamp":"2024-02-09 18:34:53.044572343 +0000 UTC"}, Hostname:"ci-3510.3.2-a-e8e52debc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.056 [INFO][4584] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.056 [INFO][4584] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.056 [INFO][4584] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-e8e52debc2' Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.058 [INFO][4584] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.061 [INFO][4584] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.066 [INFO][4584] ipam.go 489: Trying affinity for 192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.067 [INFO][4584] ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.070 [INFO][4584] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.070 [INFO][4584] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.071 [INFO][4584] ipam.go 1682: Creating new handle: k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.074 [INFO][4584] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.079 [INFO][4584] ipam.go 1216: Successfully claimed IPs: [192.168.32.3/26] block=192.168.32.0/26 handle="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.079 [INFO][4584] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.3/26] handle="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.079 [INFO][4584] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 9 18:34:53.111078 env[1440]: 2024-02-09 18:34:53.079 [INFO][4584] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.32.3/26] IPv6=[] ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" HandleID="k8s-pod-network.f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.111638 env[1440]: 2024-02-09 18:34:53.082 [INFO][4570] k8s.go 385: Populated endpoint ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bf591268-bd51-4cdc-be95-b7338d5e9351", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"", Pod:"coredns-787d4945fb-hwl75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib16ff9cdb94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:53.111638 env[1440]: 2024-02-09 18:34:53.082 [INFO][4570] k8s.go 386: Calico CNI using IPs: [192.168.32.3/32] ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.111638 env[1440]: 2024-02-09 18:34:53.082 [INFO][4570] dataplane_linux.go 68: Setting the host side veth name to calib16ff9cdb94 ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.111638 env[1440]: 2024-02-09 18:34:53.095 [INFO][4570] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.111638 env[1440]: 
2024-02-09 18:34:53.096 [INFO][4570] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bf591268-bd51-4cdc-be95-b7338d5e9351", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f", Pod:"coredns-787d4945fb-hwl75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib16ff9cdb94", MAC:"3a:5a:58:b2:6b:28", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:53.111638 env[1440]: 2024-02-09 18:34:53.108 [INFO][4570] k8s.go 491: Wrote updated endpoint to datastore ContainerID="f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f" Namespace="kube-system" Pod="coredns-787d4945fb-hwl75" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:34:53.122000 audit[4605]: NETFILTER_CFG table=filter:123 family=2 entries=44 op=nft_register_chain pid=4605 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:53.130340 env[1440]: time="2024-02-09T18:34:53.130281125Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:53.130488 env[1440]: time="2024-02-09T18:34:53.130467845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:53.130563 env[1440]: time="2024-02-09T18:34:53.130543125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:53.130783 env[1440]: time="2024-02-09T18:34:53.130757086Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f pid=4613 runtime=io.containerd.runc.v2 Feb 9 18:34:53.122000 audit[4605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22284 a0=3 a1=ffffcf3b9c40 a2=0 a3=ffff9131ffa8 items=0 ppid=4133 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:53.174739 kernel: audit: type=1325 audit(1707503693.122:316): table=filter:123 family=2 entries=44 op=nft_register_chain pid=4605 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:53.175193 kernel: audit: type=1300 audit(1707503693.122:316): arch=c00000b7 syscall=211 success=yes exit=22284 a0=3 a1=ffffcf3b9c40 a2=0 a3=ffff9131ffa8 items=0 ppid=4133 pid=4605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:53.122000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:53.200394 kernel: audit: type=1327 audit(1707503693.122:316): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:53.233478 env[1440]: time="2024-02-09T18:34:53.233429080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hwl75,Uid:bf591268-bd51-4cdc-be95-b7338d5e9351,Namespace:kube-system,Attempt:1,} returns sandbox id \"f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f\"" Feb 9 18:34:53.236808 env[1440]: time="2024-02-09T18:34:53.236769402Z" level=info msg="CreateContainer within sandbox \"f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:34:53.330229 env[1440]: time="2024-02-09T18:34:53.330186510Z" level=info msg="CreateContainer within sandbox \"f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a657fc424c821acb43e91c4f92484f3459f66683b8a94104b2229daacc58bea\"" Feb 9 18:34:53.331054 env[1440]: time="2024-02-09T18:34:53.331025110Z" level=info msg="StartContainer for \"3a657fc424c821acb43e91c4f92484f3459f66683b8a94104b2229daacc58bea\"" Feb 9 18:34:53.387643 env[1440]: time="2024-02-09T18:34:53.387545951Z" level=info msg="StartContainer for \"3a657fc424c821acb43e91c4f92484f3459f66683b8a94104b2229daacc58bea\" returns successfully" Feb 9 18:34:53.587165 systemd-networkd[1604]: cali6eb5fd23ccb: Gained IPv6LL Feb 9 18:34:53.779298 systemd-networkd[1604]: cali571278ad65b: Gained IPv6LL Feb 9 18:34:53.815692 env[1440]: time="2024-02-09T18:34:53.815472540Z" level=info msg="StopPodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\"" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.874 [INFO][4699] k8s.go 578: Cleaning up netns ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.909 
[INFO][4699] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" iface="eth0" netns="/var/run/netns/cni-daa57ece-ab46-4691-2eed-4c52a3338255" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.910 [INFO][4699] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" iface="eth0" netns="/var/run/netns/cni-daa57ece-ab46-4691-2eed-4c52a3338255" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.910 [INFO][4699] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" iface="eth0" netns="/var/run/netns/cni-daa57ece-ab46-4691-2eed-4c52a3338255" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.910 [INFO][4699] k8s.go 585: Releasing IP address(es) ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.910 [INFO][4699] utils.go 188: Calico CNI releasing IP address ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.956 [INFO][4705] ipam_plugin.go 415: Releasing address using handleID ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.960 [INFO][4705] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.960 [INFO][4705] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.977 [WARNING][4705] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.977 [INFO][4705] ipam_plugin.go 443: Releasing address using workloadID ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.984 [INFO][4705] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:34:53.991058 env[1440]: 2024-02-09 18:34:53.985 [INFO][4699] k8s.go 591: Teardown processing complete. 
ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:34:53.997432 env[1440]: time="2024-02-09T18:34:53.991223427Z" level=info msg="TearDown network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" successfully" Feb 9 18:34:53.997432 env[1440]: time="2024-02-09T18:34:53.991257387Z" level=info msg="StopPodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" returns successfully" Feb 9 18:34:53.997432 env[1440]: time="2024-02-09T18:34:53.996500951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8fllh,Uid:9c5b7cae-5788-46fc-a067-48ba7ee335bb,Namespace:kube-system,Attempt:1,}" Feb 9 18:34:53.995400 systemd[1]: run-netns-cni\x2ddaa57ece\x2dab46\x2d4691\x2d2eed\x2d4c52a3338255.mount: Deactivated successfully. Feb 9 18:34:54.052829 kubelet[2612]: I0209 18:34:54.051918 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-hwl75" podStartSLOduration=39.051881551 pod.CreationTimestamp="2024-02-09 18:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:54.049937869 +0000 UTC m=+52.501534805" watchObservedRunningTime="2024-02-09 18:34:54.051881551 +0000 UTC m=+52.503478487" Feb 9 18:34:54.142000 audit[4748]: NETFILTER_CFG table=filter:124 family=2 entries=12 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:54.142000 audit[4748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffc01f1100 a2=0 a3=ffffa02146c0 items=0 ppid=2815 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:54.194432 kernel: audit: type=1325 audit(1707503694.142:317): table=filter:124 family=2 entries=12 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:54.194564 kernel: audit: type=1300 audit(1707503694.142:317): arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffc01f1100 a2=0 a3=ffffa02146c0 items=0 ppid=2815 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:54.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:54.212047 kernel: audit: type=1327 audit(1707503694.142:317): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:54.212745 env[1440]: time="2024-02-09T18:34:54.212695426Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:54.143000 audit[4748]: NETFILTER_CFG table=nat:125 family=2 entries=30 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:54.229271 kernel: audit: type=1325 audit(1707503694.143:318): table=nat:125 family=2 entries=30 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:54.143000 audit[4748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffc01f1100 
a2=0 a3=ffffa02146c0 items=0 ppid=2815 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:54.143000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:54.236768 env[1440]: time="2024-02-09T18:34:54.236723643Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:54.245097 env[1440]: time="2024-02-09T18:34:54.245031089Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:54.260454 env[1440]: time="2024-02-09T18:34:54.260417460Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:54.260854 env[1440]: time="2024-02-09T18:34:54.260828980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 9 18:34:54.262674 env[1440]: time="2024-02-09T18:34:54.262649101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 9 18:34:54.266706 env[1440]: time="2024-02-09T18:34:54.266663944Z" level=info msg="CreateContainer within sandbox \"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 9 18:34:54.297000 audit[4782]: NETFILTER_CFG table=filter:126 family=2 entries=9 op=nft_register_rule pid=4782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:54.297000 audit[4782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffca467bb0 a2=0 a3=ffffa9ceb6c0 items=0 ppid=2815 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:54.297000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:54.298000 audit[4782]: NETFILTER_CFG table=nat:127 family=2 entries=51 op=nft_register_chain pid=4782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:54.298000 audit[4782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffca467bb0 a2=0 a3=ffffa9ceb6c0 items=0 ppid=2815 pid=4782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:54.298000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:54.309383 systemd-networkd[1604]: calia7199eaee57: Link UP Feb 9 18:34:54.325186 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia7199eaee57: link becomes ready Feb 9 18:34:54.325031 systemd-networkd[1604]: calia7199eaee57: Gained carrier Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.213 [INFO][4720] 
plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0 coredns-787d4945fb- kube-system 9c5b7cae-5788-46fc-a067-48ba7ee335bb 790 0 2024-02-09 18:34:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.2-a-e8e52debc2 coredns-787d4945fb-8fllh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia7199eaee57 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.214 [INFO][4720] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.257 [INFO][4754] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" HandleID="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.275 [INFO][4754] ipam_plugin.go 268: Auto assigning IP ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" HandleID="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b47c0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.2-a-e8e52debc2", "pod":"coredns-787d4945fb-8fllh", "timestamp":"2024-02-09 18:34:54.257218977 +0000 UTC"}, Hostname:"ci-3510.3.2-a-e8e52debc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.275 [INFO][4754] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.275 [INFO][4754] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.275 [INFO][4754] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-e8e52debc2' Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.277 [INFO][4754] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.280 [INFO][4754] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.283 [INFO][4754] ipam.go 489: Trying affinity for 192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.285 [INFO][4754] ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.288 [INFO][4754] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.288 [INFO][4754] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.291 [INFO][4754] ipam.go 1682: Creating new handle: k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.294 [INFO][4754] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.305 [INFO][4754] ipam.go 1216: Successfully claimed IPs: [192.168.32.4/26] block=192.168.32.0/26 handle="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.305 [INFO][4754] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.4/26] handle="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.305 [INFO][4754] ipam_plugin.go 377: Released host-wide IPAM lock. 
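The ipam.go records above trace Calico's block-affinity allocation on this node: look up the host's affinities, confirm the affinity for block 192.168.32.0/26, load the block, and claim the next free address (192.168.32.4 here) while the host-wide IPAM lock is held. As a rough illustration only (this is not Calico's implementation; the helper names and the set of already-used addresses are assumptions based on what this log shows), a minimal Go sketch of "first free address in a small IPv4 block" could look like this:

package main

import (
	"fmt"
	"net"
)

// nextFreeIP walks a small IPv4 block and returns the first address
// that is not already marked as allocated.
func nextFreeIP(block *net.IPNet, allocated map[string]bool) (net.IP, bool) {
	cur := block.IP.Mask(block.Mask).To4()
	for ; block.Contains(cur); cur = incr(cur) {
		if !allocated[cur.String()] {
			return cur, true
		}
	}
	return nil, false
}

// incr returns a copy of ip incremented by one, with byte carry.
func incr(ip net.IP) net.IP {
	next := make(net.IP, len(ip))
	copy(next, ip)
	for i := len(next) - 1; i >= 0; i-- {
		next[i]++
		if next[i] != 0 {
			break
		}
	}
	return next
}

func main() {
	_, block, err := net.ParseCIDR("192.168.32.0/26")
	if err != nil {
		panic(err)
	}
	// Assumed to be taken already, per the addresses visible earlier in this log.
	used := map[string]bool{
		"192.168.32.0": true, "192.168.32.1": true,
		"192.168.32.2": true, "192.168.32.3": true,
	}
	if ip, ok := nextFreeIP(block, used); ok {
		fmt.Println("next free address:", ip) // prints 192.168.32.4, matching the record above
	}
}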
Feb 9 18:34:54.333140 env[1440]: 2024-02-09 18:34:54.305 [INFO][4754] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.32.4/26] IPv6=[] ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" HandleID="k8s-pod-network.e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.333777 env[1440]: 2024-02-09 18:34:54.307 [INFO][4720] k8s.go 385: Populated endpoint ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"9c5b7cae-5788-46fc-a067-48ba7ee335bb", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"", Pod:"coredns-787d4945fb-8fllh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7199eaee57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:54.333777 env[1440]: 2024-02-09 18:34:54.307 [INFO][4720] k8s.go 386: Calico CNI using IPs: [192.168.32.4/32] ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.333777 env[1440]: 2024-02-09 18:34:54.307 [INFO][4720] dataplane_linux.go 68: Setting the host side veth name to calia7199eaee57 ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.333777 env[1440]: 2024-02-09 18:34:54.309 [INFO][4720] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.333777 env[1440]: 
2024-02-09 18:34:54.311 [INFO][4720] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"9c5b7cae-5788-46fc-a067-48ba7ee335bb", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd", Pod:"coredns-787d4945fb-8fllh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7199eaee57", MAC:"46:86:46:f1:2b:9a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:34:54.333777 env[1440]: 2024-02-09 18:34:54.331 [INFO][4720] k8s.go 491: Wrote updated endpoint to datastore ContainerID="e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd" Namespace="kube-system" Pod="coredns-787d4945fb-8fllh" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:34:54.351000 audit[4796]: NETFILTER_CFG table=filter:128 family=2 entries=38 op=nft_register_chain pid=4796 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:34:54.351000 audit[4796]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19088 a0=3 a1=ffffd48edbc0 a2=0 a3=ffff85d12fa8 items=0 ppid=4133 pid=4796 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:54.351000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:34:54.374341 env[1440]: time="2024-02-09T18:34:54.374296141Z" level=info msg="CreateContainer within sandbox \"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id 
\"fb2c6b2cc30c58131c9ef0cf11496ee6cd74d218c2db077de201a4d902d2c929\"" Feb 9 18:34:54.375215 env[1440]: time="2024-02-09T18:34:54.375190422Z" level=info msg="StartContainer for \"fb2c6b2cc30c58131c9ef0cf11496ee6cd74d218c2db077de201a4d902d2c929\"" Feb 9 18:34:54.400185 env[1440]: time="2024-02-09T18:34:54.400112079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:34:54.400185 env[1440]: time="2024-02-09T18:34:54.400156159Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:34:54.400371 env[1440]: time="2024-02-09T18:34:54.400166079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:34:54.400371 env[1440]: time="2024-02-09T18:34:54.400322159Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd pid=4823 runtime=io.containerd.runc.v2 Feb 9 18:34:54.477561 env[1440]: time="2024-02-09T18:34:54.477514455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-8fllh,Uid:9c5b7cae-5788-46fc-a067-48ba7ee335bb,Namespace:kube-system,Attempt:1,} returns sandbox id \"e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd\"" Feb 9 18:34:54.485017 env[1440]: time="2024-02-09T18:34:54.484947780Z" level=info msg="CreateContainer within sandbox \"e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 18:34:54.485878 env[1440]: time="2024-02-09T18:34:54.485851101Z" level=info msg="StartContainer for \"fb2c6b2cc30c58131c9ef0cf11496ee6cd74d218c2db077de201a4d902d2c929\" returns successfully" Feb 9 18:34:54.560605 env[1440]: time="2024-02-09T18:34:54.558081712Z" level=info msg="CreateContainer within sandbox \"e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a35316a20f8c5bd2f7be923e022e43aef62b5d22959bf7aa00b13a3fe8485d3\"" Feb 9 18:34:54.560605 env[1440]: time="2024-02-09T18:34:54.559829953Z" level=info msg="StartContainer for \"5a35316a20f8c5bd2f7be923e022e43aef62b5d22959bf7aa00b13a3fe8485d3\"" Feb 9 18:34:54.612944 env[1440]: time="2024-02-09T18:34:54.612888431Z" level=info msg="StartContainer for \"5a35316a20f8c5bd2f7be923e022e43aef62b5d22959bf7aa00b13a3fe8485d3\" returns successfully" Feb 9 18:34:54.937453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4236060780.mount: Deactivated successfully. 
Feb 9 18:34:55.056901 kubelet[2612]: I0209 18:34:55.056871 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-8fllh" podStartSLOduration=40.056827027 pod.CreationTimestamp="2024-02-09 18:34:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:55.054770266 +0000 UTC m=+53.506367162" watchObservedRunningTime="2024-02-09 18:34:55.056827027 +0000 UTC m=+53.508423963" Feb 9 18:34:55.124110 systemd-networkd[1604]: calib16ff9cdb94: Gained IPv6LL Feb 9 18:34:55.155000 audit[4935]: NETFILTER_CFG table=filter:129 family=2 entries=6 op=nft_register_rule pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:55.155000 audit[4935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffca28a630 a2=0 a3=ffffae44c6c0 items=0 ppid=2815 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:55.155000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:55.157000 audit[4935]: NETFILTER_CFG table=nat:130 family=2 entries=60 op=nft_register_rule pid=4935 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:55.157000 audit[4935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffca28a630 a2=0 a3=ffffae44c6c0 items=0 ppid=2815 pid=4935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:55.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:55.891173 systemd-networkd[1604]: calia7199eaee57: Gained IPv6LL Feb 9 18:34:56.202000 audit[4961]: NETFILTER_CFG table=filter:131 family=2 entries=6 op=nft_register_rule pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:56.202000 audit[4961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd623a1a0 a2=0 a3=ffffa16286c0 items=0 ppid=2815 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.202000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:56.221000 audit[4961]: NETFILTER_CFG table=nat:132 family=2 entries=72 op=nft_register_chain pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:56.221000 audit[4961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd623a1a0 a2=0 a3=ffffa16286c0 items=0 ppid=2815 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:56.221000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:56.565446 env[1440]: time="2024-02-09T18:34:56.565321886Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:56.579047 env[1440]: time="2024-02-09T18:34:56.579007095Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:56.584830 env[1440]: time="2024-02-09T18:34:56.584792699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:56.596128 env[1440]: time="2024-02-09T18:34:56.596086987Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:56.596704 env[1440]: time="2024-02-09T18:34:56.596666988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 9 18:34:56.598986 env[1440]: time="2024-02-09T18:34:56.598466349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 9 18:34:56.612205 env[1440]: time="2024-02-09T18:34:56.612170078Z" level=info msg="CreateContainer within sandbox \"47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 9 18:34:56.663340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3803204059.mount: Deactivated successfully. 
Feb 9 18:34:56.692324 env[1440]: time="2024-02-09T18:34:56.692265174Z" level=info msg="CreateContainer within sandbox \"47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08\"" Feb 9 18:34:56.692943 env[1440]: time="2024-02-09T18:34:56.692915055Z" level=info msg="StartContainer for \"9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08\"" Feb 9 18:34:56.813174 env[1440]: time="2024-02-09T18:34:56.813130978Z" level=info msg="StartContainer for \"9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08\" returns successfully" Feb 9 18:34:57.110973 kubelet[2612]: I0209 18:34:57.110463 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6496d9846c-w2h2q" podStartSLOduration=-9.223372003744352e+09 pod.CreationTimestamp="2024-02-09 18:34:24 +0000 UTC" firstStartedPulling="2024-02-09 18:34:52.563470232 +0000 UTC m=+51.015067168" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:57.078112922 +0000 UTC m=+55.529709858" watchObservedRunningTime="2024-02-09 18:34:57.110424184 +0000 UTC m=+55.562021160" Feb 9 18:34:58.390863 env[1440]: time="2024-02-09T18:34:58.390801422Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:58.404191 env[1440]: time="2024-02-09T18:34:58.404155751Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:58.412144 env[1440]: time="2024-02-09T18:34:58.412097597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:58.420052 env[1440]: time="2024-02-09T18:34:58.419999082Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:34:58.420551 env[1440]: time="2024-02-09T18:34:58.420520003Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 9 18:34:58.423885 env[1440]: time="2024-02-09T18:34:58.423339444Z" level=info msg="CreateContainer within sandbox \"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 9 18:34:58.466888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount279180315.mount: Deactivated successfully. 
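The ImageCreate/ImageUpdate events and the "PullImage ... returns image reference" lines above show containerd resolving a tag to a content digest and an unpacked image. As a sketch under stated assumptions (this is not the kubelet's CRI code path; the socket path is the containerd default and the k8s.io namespace matches the namespace=k8s.io fields in the shim lines above), the same pull can be driven from containerd's Go client:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket, as used on this host.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The CRI plugin keeps Kubernetes images in the k8s.io namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0",
		containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	// Prints the resolved name and its content digest, akin to the sha256 references logged above.
	fmt.Println(img.Name(), img.Target().Digest)
}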
Feb 9 18:34:58.496372 env[1440]: time="2024-02-09T18:34:58.496320574Z" level=info msg="CreateContainer within sandbox \"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e11eb8aa3560693e6195fa5e0580e0d41828059dffd3944a2a531373d0a1d0c9\"" Feb 9 18:34:58.497199 env[1440]: time="2024-02-09T18:34:58.497172935Z" level=info msg="StartContainer for \"e11eb8aa3560693e6195fa5e0580e0d41828059dffd3944a2a531373d0a1d0c9\"" Feb 9 18:34:58.577211 env[1440]: time="2024-02-09T18:34:58.577099389Z" level=info msg="StartContainer for \"e11eb8aa3560693e6195fa5e0580e0d41828059dffd3944a2a531373d0a1d0c9\" returns successfully" Feb 9 18:34:58.602534 systemd[1]: run-containerd-runc-k8s.io-e11eb8aa3560693e6195fa5e0580e0d41828059dffd3944a2a531373d0a1d0c9-runc.2tlSZY.mount: Deactivated successfully. Feb 9 18:34:58.645633 kubelet[2612]: I0209 18:34:58.645518 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:58.678381 kubelet[2612]: I0209 18:34:58.678330 2612 topology_manager.go:210] "Topology Admit Handler" Feb 9 18:34:58.705282 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:34:58.705429 kernel: audit: type=1325 audit(1707503698.690:326): table=filter:133 family=2 entries=7 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.690000 audit[5087]: NETFILTER_CFG table=filter:133 family=2 entries=7 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.690000 audit[5087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff86f4e80 a2=0 a3=ffff890e06c0 items=0 ppid=2815 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.722630 kubelet[2612]: I0209 18:34:58.722602 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtq94\" (UniqueName: \"kubernetes.io/projected/1f7b26ad-39b1-4d8f-a852-766f6b8b4da3-kube-api-access-xtq94\") pod \"calico-apiserver-db6d8b798-gcspv\" (UID: \"1f7b26ad-39b1-4d8f-a852-766f6b8b4da3\") " pod="calico-apiserver/calico-apiserver-db6d8b798-gcspv" Feb 9 18:34:58.722826 kubelet[2612]: I0209 18:34:58.722815 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsv9\" (UniqueName: \"kubernetes.io/projected/1cc797a5-6934-4605-a7f1-bb8f5185945b-kube-api-access-tnsv9\") pod \"calico-apiserver-db6d8b798-xmvt5\" (UID: \"1cc797a5-6934-4605-a7f1-bb8f5185945b\") " pod="calico-apiserver/calico-apiserver-db6d8b798-xmvt5" Feb 9 18:34:58.722940 kubelet[2612]: I0209 18:34:58.722930 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1cc797a5-6934-4605-a7f1-bb8f5185945b-calico-apiserver-certs\") pod \"calico-apiserver-db6d8b798-xmvt5\" (UID: \"1cc797a5-6934-4605-a7f1-bb8f5185945b\") " pod="calico-apiserver/calico-apiserver-db6d8b798-xmvt5" Feb 9 18:34:58.723057 kubelet[2612]: I0209 18:34:58.723047 2612 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1f7b26ad-39b1-4d8f-a852-766f6b8b4da3-calico-apiserver-certs\") pod \"calico-apiserver-db6d8b798-gcspv\" (UID: 
\"1f7b26ad-39b1-4d8f-a852-766f6b8b4da3\") " pod="calico-apiserver/calico-apiserver-db6d8b798-gcspv" Feb 9 18:34:58.745152 kernel: audit: type=1300 audit(1707503698.690:326): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=fffff86f4e80 a2=0 a3=ffff890e06c0 items=0 ppid=2815 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.764555 kernel: audit: type=1327 audit(1707503698.690:326): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.690000 audit[5087]: NETFILTER_CFG table=nat:134 family=2 entries=78 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.781501 kernel: audit: type=1325 audit(1707503698.690:327): table=nat:134 family=2 entries=78 op=nft_register_rule pid=5087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.690000 audit[5087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff86f4e80 a2=0 a3=ffff890e06c0 items=0 ppid=2815 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.813431 kernel: audit: type=1300 audit(1707503698.690:327): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff86f4e80 a2=0 a3=ffff890e06c0 items=0 ppid=2815 pid=5087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.690000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.824025 kubelet[2612]: E0209 18:34:58.823999 2612 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 18:34:58.824246 kubelet[2612]: E0209 18:34:58.824223 2612 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc797a5-6934-4605-a7f1-bb8f5185945b-calico-apiserver-certs podName:1cc797a5-6934-4605-a7f1-bb8f5185945b nodeName:}" failed. No retries permitted until 2024-02-09 18:34:59.324202597 +0000 UTC m=+57.775799533 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1cc797a5-6934-4605-a7f1-bb8f5185945b-calico-apiserver-certs") pod "calico-apiserver-db6d8b798-xmvt5" (UID: "1cc797a5-6934-4605-a7f1-bb8f5185945b") : secret "calico-apiserver-certs" not found Feb 9 18:34:58.824596 kubelet[2612]: E0209 18:34:58.824572 2612 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 18:34:58.824726 kubelet[2612]: E0209 18:34:58.824714 2612 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1f7b26ad-39b1-4d8f-a852-766f6b8b4da3-calico-apiserver-certs podName:1f7b26ad-39b1-4d8f-a852-766f6b8b4da3 nodeName:}" failed. No retries permitted until 2024-02-09 18:34:59.324701877 +0000 UTC m=+57.776298813 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1f7b26ad-39b1-4d8f-a852-766f6b8b4da3-calico-apiserver-certs") pod "calico-apiserver-db6d8b798-gcspv" (UID: "1f7b26ad-39b1-4d8f-a852-766f6b8b4da3") : secret "calico-apiserver-certs" not found Feb 9 18:34:58.829438 kernel: audit: type=1327 audit(1707503698.690:327): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.866000 audit[5115]: NETFILTER_CFG table=filter:135 family=2 entries=8 op=nft_register_rule pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.866000 audit[5115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd59870c0 a2=0 a3=ffff86b3e6c0 items=0 ppid=2815 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.918380 kernel: audit: type=1325 audit(1707503698.866:328): table=filter:135 family=2 entries=8 op=nft_register_rule pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.918525 kernel: audit: type=1300 audit(1707503698.866:328): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffd59870c0 a2=0 a3=ffff86b3e6c0 items=0 ppid=2815 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.939478 kernel: audit: type=1327 audit(1707503698.866:328): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.866000 audit[5115]: NETFILTER_CFG table=nat:136 family=2 entries=78 op=nft_register_rule pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.957294 kernel: audit: type=1325 audit(1707503698.866:329): table=nat:136 family=2 entries=78 op=nft_register_rule pid=5115 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:34:58.866000 audit[5115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd59870c0 a2=0 a3=ffff86b3e6c0 items=0 ppid=2815 pid=5115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:34:58.866000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:34:58.999703 kubelet[2612]: I0209 18:34:58.999678 2612 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 9 18:34:59.000214 kubelet[2612]: I0209 18:34:59.000194 2612 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 9 18:34:59.326791 kubelet[2612]: E0209 18:34:59.326690 2612 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 18:34:59.327080 kubelet[2612]: E0209 18:34:59.327053 2612 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/1f7b26ad-39b1-4d8f-a852-766f6b8b4da3-calico-apiserver-certs podName:1f7b26ad-39b1-4d8f-a852-766f6b8b4da3 nodeName:}" failed. No retries permitted until 2024-02-09 18:35:00.327031776 +0000 UTC m=+58.778628712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1f7b26ad-39b1-4d8f-a852-766f6b8b4da3-calico-apiserver-certs") pod "calico-apiserver-db6d8b798-gcspv" (UID: "1f7b26ad-39b1-4d8f-a852-766f6b8b4da3") : secret "calico-apiserver-certs" not found Feb 9 18:34:59.327588 kubelet[2612]: E0209 18:34:59.327571 2612 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 9 18:34:59.327745 kubelet[2612]: E0209 18:34:59.327734 2612 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1cc797a5-6934-4605-a7f1-bb8f5185945b-calico-apiserver-certs podName:1cc797a5-6934-4605-a7f1-bb8f5185945b nodeName:}" failed. No retries permitted until 2024-02-09 18:35:00.327721137 +0000 UTC m=+58.779318073 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1cc797a5-6934-4605-a7f1-bb8f5185945b-calico-apiserver-certs") pod "calico-apiserver-db6d8b798-xmvt5" (UID: "1cc797a5-6934-4605-a7f1-bb8f5185945b") : secret "calico-apiserver-certs" not found Feb 9 18:35:00.451765 env[1440]: time="2024-02-09T18:35:00.451456449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db6d8b798-gcspv,Uid:1f7b26ad-39b1-4d8f-a852-766f6b8b4da3,Namespace:calico-apiserver,Attempt:0,}" Feb 9 18:35:00.482743 env[1440]: time="2024-02-09T18:35:00.482695510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db6d8b798-xmvt5,Uid:1cc797a5-6934-4605-a7f1-bb8f5185945b,Namespace:calico-apiserver,Attempt:0,}" Feb 9 18:35:00.707935 systemd-networkd[1604]: calid0049a96aa7: Link UP Feb 9 18:35:00.721440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 18:35:00.721554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid0049a96aa7: link becomes ready Feb 9 18:35:00.727548 systemd-networkd[1604]: calid0049a96aa7: Gained carrier Feb 9 18:35:00.738921 kubelet[2612]: I0209 18:35:00.736662 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7hwhv" podStartSLOduration=-9.223371999118172e+09 pod.CreationTimestamp="2024-02-09 18:34:23 +0000 UTC" firstStartedPulling="2024-02-09 18:34:52.507468431 +0000 UTC m=+50.959065367" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:34:59.454174022 +0000 UTC m=+57.905770958" watchObservedRunningTime="2024-02-09 18:35:00.736604039 +0000 UTC m=+59.188200975" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.598 [INFO][5121] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0 calico-apiserver-db6d8b798- calico-apiserver 1f7b26ad-39b1-4d8f-a852-766f6b8b4da3 875 0 2024-02-09 18:34:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db6d8b798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-e8e52debc2 calico-apiserver-db6d8b798-gcspv eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calid0049a96aa7 [] []}} ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.599 [INFO][5121] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.649 [INFO][5133] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" HandleID="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.664 [INFO][5133] ipam_plugin.go 268: Auto assigning IP ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" HandleID="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bd190), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-e8e52debc2", "pod":"calico-apiserver-db6d8b798-gcspv", "timestamp":"2024-02-09 18:35:00.649520861 +0000 UTC"}, Hostname:"ci-3510.3.2-a-e8e52debc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.664 [INFO][5133] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.665 [INFO][5133] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.665 [INFO][5133] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-e8e52debc2' Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.666 [INFO][5133] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.671 [INFO][5133] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.675 [INFO][5133] ipam.go 489: Trying affinity for 192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.678 [INFO][5133] ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.681 [INFO][5133] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.681 [INFO][5133] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.682 [INFO][5133] ipam.go 1682: Creating new handle: k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43 Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.685 [INFO][5133] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.695 [INFO][5133] ipam.go 1216: Successfully claimed IPs: [192.168.32.5/26] block=192.168.32.0/26 handle="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.695 [INFO][5133] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.5/26] handle="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.695 [INFO][5133] ipam_plugin.go 377: Released host-wide IPAM lock. 
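Stepping back to the MountVolume.SetUp failures a few records up: the calico-apiserver pods were admitted before the calico-apiserver-certs secret existed, so the kubelet keeps them pending and retries the volume setup with a doubling delay (durationBeforeRetry 500ms, then 1s) until the secret appears. Below is a tiny Go sketch of that kind of capped exponential backoff; the cap value is an assumption for illustration, and the kubelet's actual logic in nestedpendingoperations.go is not reproduced here.

package main

import (
	"fmt"
	"time"
)

// retryWithBackoff doubles the wait between attempts, starting at 500ms,
// until op succeeds; the delay is capped at an assumed maximum.
func retryWithBackoff(op func() error) {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute // assumed cap for illustration
	for {
		if err := op(); err == nil {
			return
		} else {
			fmt.Printf("failed: %v; no retries permitted for %s\n", err, delay)
		}
		time.Sleep(delay)
		if delay < maxDelay {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("secret %q not found", "calico-apiserver-certs")
		}
		return nil
	})
	fmt.Println("succeeded after", attempts, "attempts")
}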
Feb 9 18:35:00.749319 env[1440]: 2024-02-09 18:35:00.696 [INFO][5133] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.32.5/26] IPv6=[] ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" HandleID="k8s-pod-network.39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.749874 env[1440]: 2024-02-09 18:35:00.700 [INFO][5121] k8s.go 385: Populated endpoint ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0", GenerateName:"calico-apiserver-db6d8b798-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f7b26ad-39b1-4d8f-a852-766f6b8b4da3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db6d8b798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"", Pod:"calico-apiserver-db6d8b798-gcspv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0049a96aa7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:00.749874 env[1440]: 2024-02-09 18:35:00.701 [INFO][5121] k8s.go 386: Calico CNI using IPs: [192.168.32.5/32] ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.749874 env[1440]: 2024-02-09 18:35:00.701 [INFO][5121] dataplane_linux.go 68: Setting the host side veth name to calid0049a96aa7 ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.749874 env[1440]: 2024-02-09 18:35:00.726 [INFO][5121] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.749874 env[1440]: 2024-02-09 18:35:00.726 [INFO][5121] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0", GenerateName:"calico-apiserver-db6d8b798-", Namespace:"calico-apiserver", SelfLink:"", UID:"1f7b26ad-39b1-4d8f-a852-766f6b8b4da3", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db6d8b798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43", Pod:"calico-apiserver-db6d8b798-gcspv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid0049a96aa7", MAC:"0a:2a:29:3a:d5:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:00.749874 env[1440]: 2024-02-09 18:35:00.747 [INFO][5121] k8s.go 491: Wrote updated endpoint to datastore ContainerID="39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-gcspv" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--gcspv-eth0" Feb 9 18:35:00.774000 audit[5182]: NETFILTER_CFG table=filter:137 family=2 entries=59 op=nft_register_chain pid=5182 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:35:00.774000 audit[5182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29292 a0=3 a1=ffffef725260 a2=0 a3=ffff93ce8fa8 items=0 ppid=4133 pid=5182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:00.774000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:35:00.782715 env[1440]: time="2024-02-09T18:35:00.782606109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:00.782715 env[1440]: time="2024-02-09T18:35:00.782664869Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:00.782715 env[1440]: time="2024-02-09T18:35:00.782675509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:00.788710 env[1440]: time="2024-02-09T18:35:00.788647593Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43 pid=5186 runtime=io.containerd.runc.v2 Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.700 [INFO][5138] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0 calico-apiserver-db6d8b798- calico-apiserver 1cc797a5-6934-4605-a7f1-bb8f5185945b 878 0 2024-02-09 18:34:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:db6d8b798 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.2-a-e8e52debc2 calico-apiserver-db6d8b798-xmvt5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali39d9fd5f0b8 [] []}} ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.701 [INFO][5138] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.794 [INFO][5158] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" HandleID="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.811 [INFO][5158] ipam_plugin.go 268: Auto assigning IP ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" HandleID="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b47c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.2-a-e8e52debc2", "pod":"calico-apiserver-db6d8b798-xmvt5", "timestamp":"2024-02-09 18:35:00.794541957 +0000 UTC"}, Hostname:"ci-3510.3.2-a-e8e52debc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.811 [INFO][5158] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.811 [INFO][5158] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
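Two CNI ADD invocations ([5133] for gcspv and [5158] for xmvt5) are assigning from the same 192.168.32.0/26 block at nearly the same time, which is why each sequence brackets its work with "About to acquire / Acquired / Released host-wide IPAM lock". A minimal Go sketch of that serialization, using a plain mutex and a fake allocator rather than Calico's real locking code, could look like this:

package main

import (
	"fmt"
	"sync"
)

// allocator serializes assignment with one host-wide lock, so concurrent
// requests get distinct addresses. Purely illustrative, not Calico's code.
type allocator struct {
	mu   sync.Mutex
	next int // next free host index within 192.168.32.0/26
}

func (a *allocator) assign(requester string) string {
	a.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer a.mu.Unlock() // "Released host-wide IPAM lock."
	ip := fmt.Sprintf("192.168.32.%d/26", a.next)
	a.next++
	return requester + " -> " + ip
}

func main() {
	alloc := &allocator{next: 5} // .1-.4 already taken on this node
	var wg sync.WaitGroup
	results := make(chan string, 2)
	for _, req := range []string{"[5133] calico-apiserver-db6d8b798-gcspv", "[5158] calico-apiserver-db6d8b798-xmvt5"} {
		wg.Add(1)
		go func(r string) {
			defer wg.Done()
			results <- alloc.assign(r)
		}(req)
	}
	wg.Wait()
	close(results)
	for line := range results {
		fmt.Println(line) // the two pods end up with .5 and .6, in whichever order the lock is won
	}
}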
Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.811 [INFO][5158] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.2-a-e8e52debc2' Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.813 [INFO][5158] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.816 [INFO][5158] ipam.go 372: Looking up existing affinities for host host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.822 [INFO][5158] ipam.go 489: Trying affinity for 192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.824 [INFO][5158] ipam.go 155: Attempting to load block cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.827 [INFO][5158] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.827 [INFO][5158] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.829 [INFO][5158] ipam.go 1682: Creating new handle: k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.836 [INFO][5158] ipam.go 1203: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.869 [INFO][5158] ipam.go 1216: Successfully claimed IPs: [192.168.32.6/26] block=192.168.32.0/26 handle="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.869 [INFO][5158] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.32.6/26] handle="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" host="ci-3510.3.2-a-e8e52debc2" Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.869 [INFO][5158] ipam_plugin.go 377: Released host-wide IPAM lock. 
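Note that IPAM reports the claimed address with the block's prefix length (192.168.32.6/26), while the WorkloadEndpoint dumps that follow record it as a host route (IPNetworks ["192.168.32.6/32"]). A tiny Go check, assuming nothing beyond the values visible in the log, shows the relationship:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.32.0/26") // the block both pods drew from
	for _, s := range []string{"192.168.32.5", "192.168.32.6"} {
		addr := netip.MustParseAddr(s)
		endpointNet := netip.PrefixFrom(addr, 32) // how the address appears on the endpoint
		fmt.Printf("%s in %s: %v; recorded on the endpoint as %s\n",
			addr, block, block.Contains(addr), endpointNet)
	}
}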
Feb 9 18:35:00.901318 env[1440]: 2024-02-09 18:35:00.869 [INFO][5158] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.32.6/26] IPv6=[] ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" HandleID="k8s-pod-network.cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.901982 env[1440]: 2024-02-09 18:35:00.871 [INFO][5138] k8s.go 385: Populated endpoint ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0", GenerateName:"calico-apiserver-db6d8b798-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cc797a5-6934-4605-a7f1-bb8f5185945b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db6d8b798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"", Pod:"calico-apiserver-db6d8b798-xmvt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39d9fd5f0b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:00.901982 env[1440]: 2024-02-09 18:35:00.871 [INFO][5138] k8s.go 386: Calico CNI using IPs: [192.168.32.6/32] ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.901982 env[1440]: 2024-02-09 18:35:00.871 [INFO][5138] dataplane_linux.go 68: Setting the host side veth name to cali39d9fd5f0b8 ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.901982 env[1440]: 2024-02-09 18:35:00.883 [INFO][5138] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.901982 env[1440]: 2024-02-09 18:35:00.883 [INFO][5138] k8s.go 413: Added Mac, interface name, and active container ID to endpoint 
ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0", GenerateName:"calico-apiserver-db6d8b798-", Namespace:"calico-apiserver", SelfLink:"", UID:"1cc797a5-6934-4605-a7f1-bb8f5185945b", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"db6d8b798", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b", Pod:"calico-apiserver-db6d8b798-xmvt5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali39d9fd5f0b8", MAC:"e2:39:6e:b4:ae:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:00.901982 env[1440]: 2024-02-09 18:35:00.900 [INFO][5138] k8s.go 491: Wrote updated endpoint to datastore ContainerID="cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b" Namespace="calico-apiserver" Pod="calico-apiserver-db6d8b798-xmvt5" WorkloadEndpoint="ci--3510.3.2--a--e8e52debc2-k8s-calico--apiserver--db6d8b798--xmvt5-eth0" Feb 9 18:35:00.924841 systemd-networkd[1604]: cali39d9fd5f0b8: Link UP Feb 9 18:35:00.925003 systemd-networkd[1604]: cali39d9fd5f0b8: Gained carrier Feb 9 18:35:00.933998 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali39d9fd5f0b8: link becomes ready Feb 9 18:35:00.966000 audit[5225]: NETFILTER_CFG table=filter:138 family=2 entries=50 op=nft_register_chain pid=5225 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 9 18:35:00.966000 audit[5225]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24496 a0=3 a1=ffffc891b080 a2=0 a3=ffff9db5dfa8 items=0 ppid=4133 pid=5225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:00.966000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 9 18:35:00.984824 env[1440]: time="2024-02-09T18:35:00.984775444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db6d8b798-gcspv,Uid:1f7b26ad-39b1-4d8f-a852-766f6b8b4da3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43\"" Feb 9 18:35:00.987994 env[1440]: time="2024-02-09T18:35:00.987911726Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 18:35:01.120619 env[1440]: time="2024-02-09T18:35:01.120535333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 18:35:01.120619 env[1440]: time="2024-02-09T18:35:01.120575813Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 18:35:01.120619 env[1440]: time="2024-02-09T18:35:01.120586973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 18:35:01.122456 env[1440]: time="2024-02-09T18:35:01.121289533Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b pid=5242 runtime=io.containerd.runc.v2 Feb 9 18:35:01.167325 env[1440]: time="2024-02-09T18:35:01.167285404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-db6d8b798-xmvt5,Uid:1cc797a5-6934-4605-a7f1-bb8f5185945b,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b\"" Feb 9 18:35:01.745357 env[1440]: time="2024-02-09T18:35:01.745319704Z" level=info msg="StopPodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\"" Feb 9 18:35:01.745844 env[1440]: time="2024-02-09T18:35:01.745785384Z" level=info msg="TearDown network for sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" successfully" Feb 9 18:35:01.745933 env[1440]: time="2024-02-09T18:35:01.745917024Z" level=info msg="StopPodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" returns successfully" Feb 9 18:35:01.746442 env[1440]: time="2024-02-09T18:35:01.746414304Z" level=info msg="RemovePodSandbox for \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\"" Feb 9 18:35:01.746517 env[1440]: time="2024-02-09T18:35:01.746446824Z" level=info msg="Forcibly stopping sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\"" Feb 9 18:35:01.746546 env[1440]: time="2024-02-09T18:35:01.746514385Z" level=info msg="TearDown network for sandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" successfully" Feb 9 18:35:01.775419 env[1440]: time="2024-02-09T18:35:01.775370083Z" level=info msg="RemovePodSandbox \"583314c3276978b6bc2983d47fcbabb60a8978c842f32c9f0967e73a108fe6cc\" returns successfully" Feb 9 18:35:01.775941 env[1440]: time="2024-02-09T18:35:01.775918644Z" level=info msg="StopPodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\"" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.814 [WARNING][5287] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bf591268-bd51-4cdc-be95-b7338d5e9351", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f", Pod:"coredns-787d4945fb-hwl75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib16ff9cdb94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.815 [INFO][5287] k8s.go 578: Cleaning up netns ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.815 [INFO][5287] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" iface="eth0" netns="" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.815 [INFO][5287] k8s.go 585: Releasing IP address(es) ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.815 [INFO][5287] utils.go 188: Calico CNI releasing IP address ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.838 [INFO][5294] ipam_plugin.go 415: Releasing address using handleID ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.838 [INFO][5294] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.838 [INFO][5294] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.847 [WARNING][5294] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.847 [INFO][5294] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.849 [INFO][5294] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:01.851731 env[1440]: 2024-02-09 18:35:01.850 [INFO][5287] k8s.go 591: Teardown processing complete. ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.852359 env[1440]: time="2024-02-09T18:35:01.852325374Z" level=info msg="TearDown network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" successfully" Feb 9 18:35:01.852425 env[1440]: time="2024-02-09T18:35:01.852410174Z" level=info msg="StopPodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" returns successfully" Feb 9 18:35:01.853006 env[1440]: time="2024-02-09T18:35:01.852933854Z" level=info msg="RemovePodSandbox for \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\"" Feb 9 18:35:01.853102 env[1440]: time="2024-02-09T18:35:01.853015615Z" level=info msg="Forcibly stopping sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\"" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.899 [WARNING][5313] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"bf591268-bd51-4cdc-be95-b7338d5e9351", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"f3363c73cf4046dc59fd010f8fd2aa4cb3ddd4825e6db0e1a4104ffe6aef242f", Pod:"coredns-787d4945fb-hwl75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib16ff9cdb94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.900 [INFO][5313] k8s.go 578: Cleaning up netns ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.900 [INFO][5313] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" iface="eth0" netns="" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.900 [INFO][5313] k8s.go 585: Releasing IP address(es) ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.900 [INFO][5313] utils.go 188: Calico CNI releasing IP address ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.921 [INFO][5319] ipam_plugin.go 415: Releasing address using handleID ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.921 [INFO][5319] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.921 [INFO][5319] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.930 [WARNING][5319] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.930 [INFO][5319] ipam_plugin.go 443: Releasing address using workloadID ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" HandleID="k8s-pod-network.1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--hwl75-eth0" Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.931 [INFO][5319] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:01.934475 env[1440]: 2024-02-09 18:35:01.933 [INFO][5313] k8s.go 591: Teardown processing complete. ContainerID="1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499" Feb 9 18:35:01.935005 env[1440]: time="2024-02-09T18:35:01.934941868Z" level=info msg="TearDown network for sandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" successfully" Feb 9 18:35:01.960708 env[1440]: time="2024-02-09T18:35:01.960663565Z" level=info msg="RemovePodSandbox \"1d6a5297692d50ee5bc438e514a07da8ae4dca15ac00134c04d26378064e6499\" returns successfully" Feb 9 18:35:01.961315 env[1440]: time="2024-02-09T18:35:01.961283726Z" level=info msg="StopPodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\"" Feb 9 18:35:01.972110 systemd-networkd[1604]: calid0049a96aa7: Gained IPv6LL Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.004 [WARNING][5337] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d", Pod:"csi-node-driver-7hwhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali571278ad65b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.005 [INFO][5337] k8s.go 578: Cleaning up netns ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.043164 
env[1440]: 2024-02-09 18:35:02.005 [INFO][5337] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" iface="eth0" netns="" Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.005 [INFO][5337] k8s.go 585: Releasing IP address(es) ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.005 [INFO][5337] utils.go 188: Calico CNI releasing IP address ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.024 [INFO][5344] ipam_plugin.go 415: Releasing address using handleID ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.024 [INFO][5344] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.025 [INFO][5344] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.037 [WARNING][5344] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.037 [INFO][5344] ipam_plugin.go 443: Releasing address using workloadID ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.038 [INFO][5344] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:02.043164 env[1440]: 2024-02-09 18:35:02.040 [INFO][5337] k8s.go 591: Teardown processing complete. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.043164 env[1440]: time="2024-02-09T18:35:02.041851818Z" level=info msg="TearDown network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" successfully" Feb 9 18:35:02.043164 env[1440]: time="2024-02-09T18:35:02.041880818Z" level=info msg="StopPodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" returns successfully" Feb 9 18:35:02.043859 env[1440]: time="2024-02-09T18:35:02.043830220Z" level=info msg="RemovePodSandbox for \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\"" Feb 9 18:35:02.044127 env[1440]: time="2024-02-09T18:35:02.044076540Z" level=info msg="Forcibly stopping sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\"" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.111 [WARNING][5362] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"844edbd0-8ef3-4fe8-912a-b7cf2c34e24c", ResourceVersion:"893", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"c9d4f4a6418b0c68ba10944f4d2279ae71cd47ea583c89cdd425a848bd913b2d", Pod:"csi-node-driver-7hwhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali571278ad65b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.111 [INFO][5362] k8s.go 578: Cleaning up netns ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.111 [INFO][5362] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" iface="eth0" netns="" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.112 [INFO][5362] k8s.go 585: Releasing IP address(es) ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.112 [INFO][5362] utils.go 188: Calico CNI releasing IP address ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.130 [INFO][5368] ipam_plugin.go 415: Releasing address using handleID ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.130 [INFO][5368] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.130 [INFO][5368] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.141 [WARNING][5368] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.141 [INFO][5368] ipam_plugin.go 443: Releasing address using workloadID ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" HandleID="k8s-pod-network.96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Workload="ci--3510.3.2--a--e8e52debc2-k8s-csi--node--driver--7hwhv-eth0" Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.143 [INFO][5368] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:02.146077 env[1440]: 2024-02-09 18:35:02.144 [INFO][5362] k8s.go 591: Teardown processing complete. ContainerID="96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb" Feb 9 18:35:02.146530 env[1440]: time="2024-02-09T18:35:02.146107646Z" level=info msg="TearDown network for sandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" successfully" Feb 9 18:35:02.167263 env[1440]: time="2024-02-09T18:35:02.167218860Z" level=info msg="RemovePodSandbox \"96050a4fff3fb6e71df6b4e9f21d2364e30a911aad9194e676e90eec067adebb\" returns successfully" Feb 9 18:35:02.167732 env[1440]: time="2024-02-09T18:35:02.167701740Z" level=info msg="StopPodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\"" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.212 [WARNING][5387] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0", GenerateName:"calico-kube-controllers-6496d9846c-", Namespace:"calico-system", SelfLink:"", UID:"41326e56-a6a2-4833-a012-a189fa6577b6", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d9846c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0", Pod:"calico-kube-controllers-6496d9846c-w2h2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6eb5fd23ccb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.218 [INFO][5387] k8s.go 578: Cleaning up netns ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 
18:35:02.218 [INFO][5387] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" iface="eth0" netns="" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.218 [INFO][5387] k8s.go 585: Releasing IP address(es) ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.218 [INFO][5387] utils.go 188: Calico CNI releasing IP address ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.241 [INFO][5393] ipam_plugin.go 415: Releasing address using handleID ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.242 [INFO][5393] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.242 [INFO][5393] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.251 [WARNING][5393] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.251 [INFO][5393] ipam_plugin.go 443: Releasing address using workloadID ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.253 [INFO][5393] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:02.255665 env[1440]: 2024-02-09 18:35:02.254 [INFO][5387] k8s.go 591: Teardown processing complete. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.256140 env[1440]: time="2024-02-09T18:35:02.255707518Z" level=info msg="TearDown network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" successfully" Feb 9 18:35:02.256140 env[1440]: time="2024-02-09T18:35:02.255739078Z" level=info msg="StopPodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" returns successfully" Feb 9 18:35:02.256251 env[1440]: time="2024-02-09T18:35:02.256212798Z" level=info msg="RemovePodSandbox for \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\"" Feb 9 18:35:02.256375 env[1440]: time="2024-02-09T18:35:02.256311918Z" level=info msg="Forcibly stopping sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\"" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.317 [WARNING][5412] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0", GenerateName:"calico-kube-controllers-6496d9846c-", Namespace:"calico-system", SelfLink:"", UID:"41326e56-a6a2-4833-a012-a189fa6577b6", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6496d9846c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"47cbfb1aea9c4bd698ec95b2ee821eb8e1c6abce526bacd5250c7b7094636da0", Pod:"calico-kube-controllers-6496d9846c-w2h2q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6eb5fd23ccb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.317 [INFO][5412] k8s.go 578: Cleaning up netns ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.317 [INFO][5412] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" iface="eth0" netns="" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.317 [INFO][5412] k8s.go 585: Releasing IP address(es) ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.317 [INFO][5412] utils.go 188: Calico CNI releasing IP address ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.349 [INFO][5418] ipam_plugin.go 415: Releasing address using handleID ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.349 [INFO][5418] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.349 [INFO][5418] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.359 [WARNING][5418] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.359 [INFO][5418] ipam_plugin.go 443: Releasing address using workloadID ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" HandleID="k8s-pod-network.38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Workload="ci--3510.3.2--a--e8e52debc2-k8s-calico--kube--controllers--6496d9846c--w2h2q-eth0" Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.360 [INFO][5418] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:02.363103 env[1440]: 2024-02-09 18:35:02.361 [INFO][5412] k8s.go 591: Teardown processing complete. ContainerID="38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3" Feb 9 18:35:02.363552 env[1440]: time="2024-02-09T18:35:02.363131907Z" level=info msg="TearDown network for sandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" successfully" Feb 9 18:35:02.387701 env[1440]: time="2024-02-09T18:35:02.387644243Z" level=info msg="RemovePodSandbox \"38706056fe32c30fea49d7cf5db0d9f8cc3c285b05dff582b0f414ab8a9eb5f3\" returns successfully" Feb 9 18:35:02.388564 env[1440]: time="2024-02-09T18:35:02.388530204Z" level=info msg="StopPodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\"" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.444 [WARNING][5438] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"9c5b7cae-5788-46fc-a067-48ba7ee335bb", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd", Pod:"coredns-787d4945fb-8fllh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7199eaee57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.444 [INFO][5438] k8s.go 578: Cleaning up netns ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.444 [INFO][5438] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" iface="eth0" netns="" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.444 [INFO][5438] k8s.go 585: Releasing IP address(es) ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.444 [INFO][5438] utils.go 188: Calico CNI releasing IP address ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.471 [INFO][5445] ipam_plugin.go 415: Releasing address using handleID ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.471 [INFO][5445] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.471 [INFO][5445] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.483 [WARNING][5445] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.483 [INFO][5445] ipam_plugin.go 443: Releasing address using workloadID ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.484 [INFO][5445] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:02.487820 env[1440]: 2024-02-09 18:35:02.486 [INFO][5438] k8s.go 591: Teardown processing complete. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.488269 env[1440]: time="2024-02-09T18:35:02.487839269Z" level=info msg="TearDown network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" successfully" Feb 9 18:35:02.488269 env[1440]: time="2024-02-09T18:35:02.487868829Z" level=info msg="StopPodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" returns successfully" Feb 9 18:35:02.488755 env[1440]: time="2024-02-09T18:35:02.488726789Z" level=info msg="RemovePodSandbox for \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\"" Feb 9 18:35:02.489011 env[1440]: time="2024-02-09T18:35:02.488920589Z" level=info msg="Forcibly stopping sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\"" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.537 [WARNING][5463] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"9c5b7cae-5788-46fc-a067-48ba7ee335bb", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2024, time.February, 9, 18, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.2-a-e8e52debc2", ContainerID:"e8877dd75c17e26c5b0d2b27e5e2e0c775bf1b5756fc1213f760b4d588757dfd", Pod:"coredns-787d4945fb-8fllh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7199eaee57", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.542 [INFO][5463] k8s.go 578: Cleaning up netns ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.542 [INFO][5463] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" iface="eth0" netns="" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.542 [INFO][5463] k8s.go 585: Releasing IP address(es) ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.542 [INFO][5463] utils.go 188: Calico CNI releasing IP address ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.562 [INFO][5469] ipam_plugin.go 415: Releasing address using handleID ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.562 [INFO][5469] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.563 [INFO][5469] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.572 [WARNING][5469] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.572 [INFO][5469] ipam_plugin.go 443: Releasing address using workloadID ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" HandleID="k8s-pod-network.663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Workload="ci--3510.3.2--a--e8e52debc2-k8s-coredns--787d4945fb--8fllh-eth0" Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.574 [INFO][5469] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 9 18:35:02.581922 env[1440]: 2024-02-09 18:35:02.579 [INFO][5463] k8s.go 591: Teardown processing complete. ContainerID="663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac" Feb 9 18:35:02.582504 env[1440]: time="2024-02-09T18:35:02.582464370Z" level=info msg="TearDown network for sandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" successfully" Feb 9 18:35:02.676179 systemd-networkd[1604]: cali39d9fd5f0b8: Gained IPv6LL Feb 9 18:35:02.710818 env[1440]: time="2024-02-09T18:35:02.710775654Z" level=info msg="RemovePodSandbox \"663ebda85219c50fa899699642de98855df4e6c40e8c34ec41e73a2e0aa4dbac\" returns successfully" Feb 9 18:35:02.711450 env[1440]: time="2024-02-09T18:35:02.711426614Z" level=info msg="StopPodSandbox for \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\"" Feb 9 18:35:02.711640 env[1440]: time="2024-02-09T18:35:02.711591294Z" level=info msg="TearDown network for sandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" successfully" Feb 9 18:35:02.711713 env[1440]: time="2024-02-09T18:35:02.711695974Z" level=info msg="StopPodSandbox for \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" returns successfully" Feb 9 18:35:02.712084 env[1440]: time="2024-02-09T18:35:02.712061334Z" level=info msg="RemovePodSandbox for \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\"" Feb 9 18:35:02.712208 env[1440]: time="2024-02-09T18:35:02.712175974Z" level=info msg="Forcibly stopping sandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\"" Feb 9 18:35:02.712318 env[1440]: time="2024-02-09T18:35:02.712301455Z" level=info msg="TearDown network for sandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" successfully" Feb 9 18:35:02.735318 env[1440]: time="2024-02-09T18:35:02.735202269Z" level=info msg="RemovePodSandbox \"a7cc5971c5d7c392cad09b817820ebc1008fc839081f958dd20ed27f0f548065\" returns successfully" Feb 9 18:35:05.402428 env[1440]: time="2024-02-09T18:35:05.402380016Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.411822 env[1440]: time="2024-02-09T18:35:05.411777622Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.419106 env[1440]: time="2024-02-09T18:35:05.419065547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.428451 env[1440]: 
time="2024-02-09T18:35:05.428408513Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.429116 env[1440]: time="2024-02-09T18:35:05.429083873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 18:35:05.430731 env[1440]: time="2024-02-09T18:35:05.430695234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 9 18:35:05.431627 env[1440]: time="2024-02-09T18:35:05.431584835Z" level=info msg="CreateContainer within sandbox \"39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 18:35:05.485536 env[1440]: time="2024-02-09T18:35:05.485472389Z" level=info msg="CreateContainer within sandbox \"39602b567048a9161dfe4dc1593ce9836953f0a6fd5229da7539f7c25674cb43\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7\"" Feb 9 18:35:05.486255 env[1440]: time="2024-02-09T18:35:05.486225829Z" level=info msg="StartContainer for \"094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7\"" Feb 9 18:35:05.511712 systemd[1]: run-containerd-runc-k8s.io-094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7-runc.ys9WtY.mount: Deactivated successfully. Feb 9 18:35:05.566908 env[1440]: time="2024-02-09T18:35:05.566864480Z" level=info msg="StartContainer for \"094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7\" returns successfully" Feb 9 18:35:05.855233 env[1440]: time="2024-02-09T18:35:05.855189622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.869226 env[1440]: time="2024-02-09T18:35:05.869177990Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.878663 env[1440]: time="2024-02-09T18:35:05.878612916Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.887427 env[1440]: time="2024-02-09T18:35:05.887381202Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 18:35:05.888349 env[1440]: time="2024-02-09T18:35:05.888323243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 9 18:35:05.893002 env[1440]: time="2024-02-09T18:35:05.892934965Z" level=info msg="CreateContainer within sandbox \"cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 9 18:35:05.945836 env[1440]: time="2024-02-09T18:35:05.945791319Z" level=info msg="CreateContainer within sandbox 
\"cc7ccf422fff6d52c80f115b696f7781735d4ce96c643c785a7ee2df9194c48b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a3a97c5e7b73fca496b074b647997a0cc50d810862bb90147befe728fdff636f\"" Feb 9 18:35:05.948452 env[1440]: time="2024-02-09T18:35:05.948423400Z" level=info msg="StartContainer for \"a3a97c5e7b73fca496b074b647997a0cc50d810862bb90147befe728fdff636f\"" Feb 9 18:35:06.046741 env[1440]: time="2024-02-09T18:35:06.046693022Z" level=info msg="StartContainer for \"a3a97c5e7b73fca496b074b647997a0cc50d810862bb90147befe728fdff636f\" returns successfully" Feb 9 18:35:06.111773 kubelet[2612]: I0209 18:35:06.111660 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-db6d8b798-gcspv" podStartSLOduration=-9.223372028743155e+09 pod.CreationTimestamp="2024-02-09 18:34:58 +0000 UTC" firstStartedPulling="2024-02-09 18:35:00.986377285 +0000 UTC m=+59.437974221" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:06.098969375 +0000 UTC m=+64.550566271" watchObservedRunningTime="2024-02-09 18:35:06.111620703 +0000 UTC m=+64.563217639" Feb 9 18:35:06.253000 audit[5574]: NETFILTER_CFG table=filter:139 family=2 entries=8 op=nft_register_rule pid=5574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.260096 kernel: kauditd_printk_skb: 8 callbacks suppressed Feb 9 18:35:06.260201 kernel: audit: type=1325 audit(1707503706.253:332): table=filter:139 family=2 entries=8 op=nft_register_rule pid=5574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.253000 audit[5574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffdb78cb70 a2=0 a3=ffffa276a6c0 items=0 ppid=2815 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.321979 kernel: audit: type=1300 audit(1707503706.253:332): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffdb78cb70 a2=0 a3=ffffa276a6c0 items=0 ppid=2815 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.253000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:06.372540 kernel: audit: type=1327 audit(1707503706.253:332): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:06.257000 audit[5574]: NETFILTER_CFG table=nat:140 family=2 entries=78 op=nft_register_rule pid=5574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.394670 kernel: audit: type=1325 audit(1707503706.257:333): table=nat:140 family=2 entries=78 op=nft_register_rule pid=5574 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.257000 audit[5574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffdb78cb70 a2=0 a3=ffffa276a6c0 items=0 ppid=2815 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.428309 kernel: audit: type=1300 audit(1707503706.257:333): arch=c00000b7 syscall=211 success=yes 
exit=24988 a0=3 a1=ffffdb78cb70 a2=0 a3=ffffa276a6c0 items=0 ppid=2815 pid=5574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.257000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:06.447102 kernel: audit: type=1327 audit(1707503706.257:333): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:06.486000 audit[5600]: NETFILTER_CFG table=filter:141 family=2 entries=8 op=nft_register_rule pid=5600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.486000 audit[5600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc099a1c0 a2=0 a3=ffffb09236c0 items=0 ppid=2815 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.549524 kernel: audit: type=1325 audit(1707503706.486:334): table=filter:141 family=2 entries=8 op=nft_register_rule pid=5600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.549647 kernel: audit: type=1300 audit(1707503706.486:334): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffc099a1c0 a2=0 a3=ffffb09236c0 items=0 ppid=2815 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:06.567288 kernel: audit: type=1327 audit(1707503706.486:334): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:06.506000 audit[5600]: NETFILTER_CFG table=nat:142 family=2 entries=78 op=nft_register_rule pid=5600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.594602 kernel: audit: type=1325 audit(1707503706.506:335): table=nat:142 family=2 entries=78 op=nft_register_rule pid=5600 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:06.506000 audit[5600]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffc099a1c0 a2=0 a3=ffffb09236c0 items=0 ppid=2815 pid=5600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:06.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:07.453807 systemd[1]: run-containerd-runc-k8s.io-9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08-runc.YeBSws.mount: Deactivated successfully. Feb 9 18:35:17.216801 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.y2vnEe.mount: Deactivated successfully. Feb 9 18:35:30.470211 systemd[1]: run-containerd-runc-k8s.io-094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7-runc.Li86Rd.mount: Deactivated successfully. 
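The very large negative podStartSLOduration values reported by kubelet in this log (≈ -9.22e+09 seconds) line up with the minimum value a Go time.Duration can hold: lastFinishedPulling is logged as the zero time ("0001-01-01 00:00:00 +0000 UTC") while firstStartedPulling is a real 2024 timestamp, and time.Time.Sub saturates rather than overflowing. A minimal sketch of that arithmetic, assuming the tracker subtracts these two timestamps directly (the exact kubelet formula is not visible in this log):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the kubelet line: lastFinishedPulling is the zero time
	// ("0001-01-01 00:00:00 +0000 UTC"), firstStartedPulling is a real timestamp.
	var lastFinishedPulling time.Time
	firstStartedPulling := time.Date(2024, time.February, 9, 18, 35, 0, 986377285, time.UTC)

	// time.Time.Sub saturates at the minimum time.Duration instead of overflowing,
	// so a zero end time pins the result near math.MinInt64 nanoseconds.
	d := lastFinishedPulling.Sub(firstStartedPulling)
	fmt.Println(d)           // -2562047h47m16.854775808s
	fmt.Println(d.Seconds()) // ≈ -9.223372036854776e+09, the magnitude seen in the log
}
```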
Feb 9 18:35:30.518101 kubelet[2612]: I0209 18:35:30.513504 2612 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-db6d8b798-xmvt5" podStartSLOduration=-9.223372004341309e+09 pod.CreationTimestamp="2024-02-09 18:34:58 +0000 UTC" firstStartedPulling="2024-02-09 18:35:01.168489564 +0000 UTC m=+59.620086500" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 18:35:06.112773023 +0000 UTC m=+64.564369959" watchObservedRunningTime="2024-02-09 18:35:30.513467994 +0000 UTC m=+88.965064930" Feb 9 18:35:30.585000 audit[5729]: NETFILTER_CFG table=filter:143 family=2 entries=7 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.592524 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 18:35:30.592649 kernel: audit: type=1325 audit(1707503730.585:336): table=filter:143 family=2 entries=7 op=nft_register_rule pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.585000 audit[5729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffef42e750 a2=0 a3=ffff81ec86c0 items=0 ppid=2815 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.642049 kernel: audit: type=1300 audit(1707503730.585:336): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffef42e750 a2=0 a3=ffff81ec86c0 items=0 ppid=2815 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.585000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:30.659104 kernel: audit: type=1327 audit(1707503730.585:336): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:30.590000 audit[5729]: NETFILTER_CFG table=nat:144 family=2 entries=85 op=nft_register_chain pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.675421 kernel: audit: type=1325 audit(1707503730.590:337): table=nat:144 family=2 entries=85 op=nft_register_chain pid=5729 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.590000 audit[5729]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28484 a0=3 a1=ffffef42e750 a2=0 a3=ffff81ec86c0 items=0 ppid=2815 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.708698 kernel: audit: type=1300 audit(1707503730.590:337): arch=c00000b7 syscall=211 success=yes exit=28484 a0=3 a1=ffffef42e750 a2=0 a3=ffff81ec86c0 items=0 ppid=2815 pid=5729 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.590000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:30.724586 kernel: audit: type=1327 audit(1707503730.590:337): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:30.724696 kernel: audit: type=1325 audit(1707503730.675:338): table=filter:145 family=2 entries=6 op=nft_register_rule pid=5755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.675000 audit[5755]: NETFILTER_CFG table=filter:145 family=2 entries=6 op=nft_register_rule pid=5755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.675000 audit[5755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd33648b0 a2=0 a3=ffff93e956c0 items=0 ppid=2815 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.773442 kernel: audit: type=1300 audit(1707503730.675:338): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd33648b0 a2=0 a3=ffff93e956c0 items=0 ppid=2815 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:30.789600 kernel: audit: type=1327 audit(1707503730.675:338): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:30.679000 audit[5755]: NETFILTER_CFG table=nat:146 family=2 entries=92 op=nft_register_chain pid=5755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.806413 kernel: audit: type=1325 audit(1707503730.679:339): table=nat:146 family=2 entries=92 op=nft_register_chain pid=5755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:35:30.679000 audit[5755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffd33648b0 a2=0 a3=ffff93e956c0 items=0 ppid=2815 pid=5755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:35:30.679000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:35:37.454708 systemd[1]: run-containerd-runc-k8s.io-9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08-runc.0glZDT.mount: Deactivated successfully. Feb 9 18:35:47.212985 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.iDWLSj.mount: Deactivated successfully. Feb 9 18:35:55.187147 systemd[1]: run-containerd-runc-k8s.io-9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08-runc.Dr7jHx.mount: Deactivated successfully. Feb 9 18:36:00.470771 systemd[1]: run-containerd-runc-k8s.io-094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7-runc.DhAbXq.mount: Deactivated successfully. Feb 9 18:36:01.463461 systemd[1]: run-containerd-runc-k8s.io-a3a97c5e7b73fca496b074b647997a0cc50d810862bb90147befe728fdff636f-runc.G9Ny4o.mount: Deactivated successfully. Feb 9 18:36:07.454122 systemd[1]: run-containerd-runc-k8s.io-9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08-runc.9Q5ov0.mount: Deactivated successfully. 
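The proctitle= values in the audit records above are the raw process command line, hex-encoded with NUL bytes separating arguments; the value repeated throughout these NETFILTER_CFG records decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`. A small standard-library sketch of the decoding:

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// PROCTITLE value repeated in the NETFILTER_CFG audit records above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"

	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// The kernel records the command line with NUL bytes between arguments.
	args := strings.Split(string(raw), "\x00")
	fmt.Println(strings.Join(args, " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
```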
Feb 9 18:36:42.036969 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.12.6:53518.service. Feb 9 18:36:42.046406 kernel: kauditd_printk_skb: 2 callbacks suppressed Feb 9 18:36:42.046505 kernel: audit: type=1130 audit(1707503802.036:340): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.40:22-10.200.12.6:53518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:42.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.40:22-10.200.12.6:53518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:42.489985 sshd[5997]: Accepted publickey for core from 10.200.12.6 port 53518 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:42.487000 audit[5997]: USER_ACCT pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.516660 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:42.489000 audit[5997]: CRED_ACQ pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.543436 kernel: audit: type=1101 audit(1707503802.487:341): pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.543556 kernel: audit: type=1103 audit(1707503802.489:342): pid=5997 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.560318 kernel: audit: type=1006 audit(1707503802.489:343): pid=5997 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 9 18:36:42.489000 audit[5997]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffaed3f90 a2=3 a3=1 items=0 ppid=1 pid=5997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.588974 kernel: audit: type=1300 audit(1707503802.489:343): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffaed3f90 a2=3 a3=1 items=0 ppid=1 pid=5997 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:42.489000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:42.598941 kernel: audit: type=1327 audit(1707503802.489:343): proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:42.600257 systemd[1]: Started session-10.scope. Feb 9 18:36:42.600482 systemd-logind[1421]: New session 10 of user core. 
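Each kernel-printed audit record also carries an audit(EPOCH.MILLIS:SERIAL) stamp alongside the journald wall-clock prefix; the epoch seconds convert back to the same instant (1707503802.036 is Feb 9 18:36:42.036 UTC). A short sketch that extracts and converts the stamp, using one of the lines above as input:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"time"
)

// Matches the "audit(EPOCH.MILLIS:SERIAL)" stamp in kernel-printed audit lines.
var auditStamp = regexp.MustCompile(`audit\((\d+)\.(\d+):(\d+)\)`)

func main() {
	// Truncated copy of one of the kernel audit lines above.
	line := "kernel: audit: type=1130 audit(1707503802.036:340): pid=1 uid=0 auid=4294967295"

	m := auditStamp.FindStringSubmatch(line)
	if m == nil {
		panic("no audit stamp in line")
	}
	sec, _ := strconv.ParseInt(m[1], 10, 64)
	ms, _ := strconv.ParseInt(m[2], 10, 64)

	ts := time.Unix(sec, ms*int64(time.Millisecond)).UTC()
	fmt.Printf("record %s at %s UTC\n", m[3], ts.Format("Jan 2 15:04:05.000"))
	// Prints: record 340 at Feb 9 18:36:42.036 UTC, matching the wall-clock prefix.
}
```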
Feb 9 18:36:42.613000 audit[5997]: USER_START pid=5997 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.621000 audit[6001]: CRED_ACQ pid=6001 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.669471 kernel: audit: type=1105 audit(1707503802.613:344): pid=5997 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:42.669594 kernel: audit: type=1103 audit(1707503802.621:345): pid=6001 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:43.002646 sshd[5997]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:43.002000 audit[5997]: USER_END pid=5997 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:43.005807 systemd-logind[1421]: Session 10 logged out. Waiting for processes to exit. Feb 9 18:36:43.007107 systemd[1]: sshd@7-10.200.20.40:22-10.200.12.6:53518.service: Deactivated successfully. Feb 9 18:36:43.007977 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 18:36:43.009311 systemd-logind[1421]: Removed session 10. Feb 9 18:36:43.047997 kernel: audit: type=1106 audit(1707503803.002:346): pid=5997 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:43.002000 audit[5997]: CRED_DISP pid=5997 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:43.085976 kernel: audit: type=1104 audit(1707503803.002:347): pid=5997 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:43.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.200.20.40:22-10.200.12.6:53518 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:48.075833 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.12.6:43084.service. Feb 9 18:36:48.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.40:22-10.200.12.6:43084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 18:36:48.081796 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:36:48.081874 kernel: audit: type=1130 audit(1707503808.075:349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.40:22-10.200.12.6:43084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:48.551000 audit[6038]: USER_ACCT pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.552324 sshd[6038]: Accepted publickey for core from 10.200.12.6 port 43084 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:48.552000 audit[6038]: CRED_ACQ pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.579276 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:48.604572 kernel: audit: type=1101 audit(1707503808.551:350): pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.604707 kernel: audit: type=1103 audit(1707503808.552:351): pid=6038 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.608815 systemd[1]: Started session-11.scope. Feb 9 18:36:48.610031 systemd-logind[1421]: New session 11 of user core. 
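The "Accepted publickey ... RSA SHA256:AExcTof2ms..." lines identify the client key by its OpenSSH SHA-256 fingerprint: the unpadded base64 of the SHA-256 digest of the wire-format public key blob. A minimal sketch of that computation, using a throwaway Ed25519 key because the actual key for core is not part of this log (assumes the golang.org/x/crypto/ssh package):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Throwaway key: the real public key behind "SHA256:AExcTof2ms..." is not in the log.
	pub, _, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		panic(err)
	}
	sshPub, err := ssh.NewPublicKey(pub)
	if err != nil {
		panic(err)
	}

	// OpenSSH-style fingerprint: unpadded base64 of SHA-256 over the wire-format key blob.
	sum := sha256.Sum256(sshPub.Marshal())
	fmt.Println("SHA256:" + base64.RawStdEncoding.EncodeToString(sum[:]))

	// ssh.FingerprintSHA256 produces the same string.
	fmt.Println(ssh.FingerprintSHA256(sshPub))
}
```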
Feb 9 18:36:48.625528 kernel: audit: type=1006 audit(1707503808.552:352): pid=6038 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 9 18:36:48.552000 audit[6038]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff4d4500 a2=3 a3=1 items=0 ppid=1 pid=6038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:48.654159 kernel: audit: type=1300 audit(1707503808.552:352): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff4d4500 a2=3 a3=1 items=0 ppid=1 pid=6038 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:48.552000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:48.663422 kernel: audit: type=1327 audit(1707503808.552:352): proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:48.663530 kernel: audit: type=1105 audit(1707503808.615:353): pid=6038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.615000 audit[6038]: USER_START pid=6038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.622000 audit[6041]: CRED_ACQ pid=6041 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.717386 kernel: audit: type=1103 audit(1707503808.622:354): pid=6041 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.939168 sshd[6038]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:48.939000 audit[6038]: USER_END pid=6038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.943605 systemd-logind[1421]: Session 11 logged out. Waiting for processes to exit. Feb 9 18:36:48.944928 systemd[1]: sshd@8-10.200.20.40:22-10.200.12.6:43084.service: Deactivated successfully. Feb 9 18:36:48.945870 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 18:36:48.947195 systemd-logind[1421]: Removed session 11. 
Feb 9 18:36:48.941000 audit[6038]: CRED_DISP pid=6038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.995318 kernel: audit: type=1106 audit(1707503808.939:355): pid=6038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.995458 kernel: audit: type=1104 audit(1707503808.941:356): pid=6038 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:48.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.200.20.40:22-10.200.12.6:43084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:54.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.40:22-10.200.12.6:43092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:54.007159 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.12.6:43092.service. Feb 9 18:36:54.012923 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:36:54.013004 kernel: audit: type=1130 audit(1707503814.005:358): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.40:22-10.200.12.6:43092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:36:54.437000 audit[6053]: USER_ACCT pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.439451 sshd[6053]: Accepted publickey for core from 10.200.12.6 port 43092 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:36:54.466623 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:36:54.466983 kernel: audit: type=1101 audit(1707503814.437:359): pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.467040 kernel: audit: type=1103 audit(1707503814.464:360): pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.464000 audit[6053]: CRED_ACQ pid=6053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.507101 kernel: audit: type=1006 audit(1707503814.464:361): pid=6053 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 9 18:36:54.464000 audit[6053]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca35f450 a2=3 a3=1 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:54.533453 kernel: audit: type=1300 audit(1707503814.464:361): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffca35f450 a2=3 a3=1 items=0 ppid=1 pid=6053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:36:54.464000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:54.536405 systemd[1]: Started session-12.scope. Feb 9 18:36:54.542609 kernel: audit: type=1327 audit(1707503814.464:361): proctitle=737368643A20636F7265205B707269765D Feb 9 18:36:54.542596 systemd-logind[1421]: New session 12 of user core. 
Feb 9 18:36:54.544000 audit[6053]: USER_START pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.546000 audit[6056]: CRED_ACQ pid=6056 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.600976 kernel: audit: type=1105 audit(1707503814.544:362): pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.601081 kernel: audit: type=1103 audit(1707503814.546:363): pid=6056 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.844145 sshd[6053]: pam_unix(sshd:session): session closed for user core Feb 9 18:36:54.843000 audit[6053]: USER_END pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.846761 systemd-logind[1421]: Session 12 logged out. Waiting for processes to exit. Feb 9 18:36:54.848095 systemd[1]: sshd@9-10.200.20.40:22-10.200.12.6:43092.service: Deactivated successfully. Feb 9 18:36:54.848930 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 18:36:54.850357 systemd-logind[1421]: Removed session 12. Feb 9 18:36:54.843000 audit[6053]: CRED_DISP pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.900763 kernel: audit: type=1106 audit(1707503814.843:364): pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.900861 kernel: audit: type=1104 audit(1707503814.843:365): pid=6053 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:36:54.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.200.20.40:22-10.200.12.6:43092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:36:59.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.40:22-10.200.12.6:47516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:36:59.912915 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.12.6:47516.service. Feb 9 18:36:59.918506 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:36:59.918605 kernel: audit: type=1130 audit(1707503819.912:367): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.40:22-10.200.12.6:47516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:00.362000 audit[6086]: USER_ACCT pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.363438 sshd[6086]: Accepted publickey for core from 10.200.12.6 port 47516 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:00.390655 sshd[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:00.391029 kernel: audit: type=1101 audit(1707503820.362:368): pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.391081 kernel: audit: type=1103 audit(1707503820.389:369): pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.389000 audit[6086]: CRED_ACQ pid=6086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.396232 systemd[1]: Started session-13.scope. Feb 9 18:37:00.397304 systemd-logind[1421]: New session 13 of user core. 
Feb 9 18:37:00.432488 kernel: audit: type=1006 audit(1707503820.389:370): pid=6086 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Feb 9 18:37:00.432587 kernel: audit: type=1300 audit(1707503820.389:370): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3535e30 a2=3 a3=1 items=0 ppid=1 pid=6086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:00.389000 audit[6086]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3535e30 a2=3 a3=1 items=0 ppid=1 pid=6086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:00.389000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:00.468152 kernel: audit: type=1327 audit(1707503820.389:370): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:00.400000 audit[6086]: USER_START pid=6086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.496271 kernel: audit: type=1105 audit(1707503820.400:371): pid=6086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.416000 audit[6089]: CRED_ACQ pid=6089 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.523062 kernel: audit: type=1103 audit(1707503820.416:372): pid=6089 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.525864 systemd[1]: run-containerd-runc-k8s.io-094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7-runc.wbwtk7.mount: Deactivated successfully. Feb 9 18:37:00.756279 sshd[6086]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:00.756000 audit[6086]: USER_END pid=6086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.759551 systemd-logind[1421]: Session 13 logged out. Waiting for processes to exit. Feb 9 18:37:00.760847 systemd[1]: sshd@10-10.200.20.40:22-10.200.12.6:47516.service: Deactivated successfully. Feb 9 18:37:00.761701 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 18:37:00.763002 systemd-logind[1421]: Removed session 13. 
Feb 9 18:37:00.756000 audit[6086]: CRED_DISP pid=6086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.810050 kernel: audit: type=1106 audit(1707503820.756:373): pid=6086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.810227 kernel: audit: type=1104 audit(1707503820.756:374): pid=6086 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:00.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.200.20.40:22-10.200.12.6:47516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:05.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.40:22-10.200.12.6:47524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:05.824651 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.12.6:47524.service. Feb 9 18:37:05.832564 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:37:05.832649 kernel: audit: type=1130 audit(1707503825.823:376): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.40:22-10.200.12.6:47524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:06.241000 audit[6142]: USER_ACCT pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.242634 sshd[6142]: Accepted publickey for core from 10.200.12.6 port 47524 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:06.268000 audit[6142]: CRED_ACQ pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.269089 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:06.294931 kernel: audit: type=1101 audit(1707503826.241:377): pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.295046 kernel: audit: type=1103 audit(1707503826.268:378): pid=6142 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.311281 kernel: audit: type=1006 audit(1707503826.268:379): pid=6142 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Feb 9 18:37:06.268000 audit[6142]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb26e660 a2=3 a3=1 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:06.339794 kernel: audit: type=1300 audit(1707503826.268:379): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffb26e660 a2=3 a3=1 items=0 ppid=1 pid=6142 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:06.315453 systemd[1]: Started session-14.scope. Feb 9 18:37:06.315629 systemd-logind[1421]: New session 14 of user core. 
Feb 9 18:37:06.268000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:06.349459 kernel: audit: type=1327 audit(1707503826.268:379): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:06.319000 audit[6142]: USER_START pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.327000 audit[6145]: CRED_ACQ pid=6145 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.402012 kernel: audit: type=1105 audit(1707503826.319:380): pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.402089 kernel: audit: type=1103 audit(1707503826.327:381): pid=6145 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.619775 sshd[6142]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:06.619000 audit[6142]: USER_END pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.623622 systemd[1]: sshd@11-10.200.20.40:22-10.200.12.6:47524.service: Deactivated successfully. Feb 9 18:37:06.624481 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 18:37:06.621000 audit[6142]: CRED_DISP pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.652989 systemd-logind[1421]: Session 14 logged out. Waiting for processes to exit. Feb 9 18:37:06.653972 systemd-logind[1421]: Removed session 14. Feb 9 18:37:06.675229 kernel: audit: type=1106 audit(1707503826.619:382): pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.675318 kernel: audit: type=1104 audit(1707503826.621:383): pid=6142 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:06.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.200.20.40:22-10.200.12.6:47524 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:06.693351 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.12.6:47538.service. 
Feb 9 18:37:06.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.40:22-10.200.12.6:47538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:07.140000 audit[6156]: USER_ACCT pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:07.141645 sshd[6156]: Accepted publickey for core from 10.200.12.6 port 47538 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:07.141000 audit[6156]: CRED_ACQ pid=6156 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:07.141000 audit[6156]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5fb1970 a2=3 a3=1 items=0 ppid=1 pid=6156 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:07.141000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:07.143179 sshd[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:07.147516 systemd[1]: Started session-15.scope. Feb 9 18:37:07.147911 systemd-logind[1421]: New session 15 of user core. Feb 9 18:37:07.152000 audit[6156]: USER_START pid=6156 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:07.153000 audit[6159]: CRED_ACQ pid=6159 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:07.473132 systemd[1]: run-containerd-runc-k8s.io-9e02f091848697336859ce4ca82962d0e106b5f6ad12288db5c8e09808142a08-runc.knhfXJ.mount: Deactivated successfully. Feb 9 18:37:08.576722 sshd[6156]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:08.577000 audit[6156]: USER_END pid=6156 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:08.577000 audit[6156]: CRED_DISP pid=6156 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:08.580042 systemd[1]: sshd@12-10.200.20.40:22-10.200.12.6:47538.service: Deactivated successfully. Feb 9 18:37:08.580871 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 18:37:08.581361 systemd-logind[1421]: Session 15 logged out. Waiting for processes to exit. 
Feb 9 18:37:08.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.200.20.40:22-10.200.12.6:47538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:08.582099 systemd-logind[1421]: Removed session 15. Feb 9 18:37:08.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.40:22-10.200.12.6:54452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:08.652927 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.12.6:54452.service. Feb 9 18:37:09.069000 audit[6186]: USER_ACCT pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:09.071123 sshd[6186]: Accepted publickey for core from 10.200.12.6 port 54452 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:09.071000 audit[6186]: CRED_ACQ pid=6186 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:09.071000 audit[6186]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff3c4c590 a2=3 a3=1 items=0 ppid=1 pid=6186 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:09.071000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:09.072732 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:09.076836 systemd-logind[1421]: New session 16 of user core. Feb 9 18:37:09.077361 systemd[1]: Started session-16.scope. 
Feb 9 18:37:09.081000 audit[6186]: USER_START pid=6186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:09.082000 audit[6189]: CRED_ACQ pid=6189 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:09.438893 sshd[6186]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:09.438000 audit[6186]: USER_END pid=6186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:09.438000 audit[6186]: CRED_DISP pid=6186 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:09.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.200.20.40:22-10.200.12.6:54452 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:09.441292 systemd-logind[1421]: Session 16 logged out. Waiting for processes to exit. Feb 9 18:37:09.441454 systemd[1]: sshd@13-10.200.20.40:22-10.200.12.6:54452.service: Deactivated successfully. Feb 9 18:37:09.442386 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 18:37:09.442857 systemd-logind[1421]: Removed session 16. Feb 9 18:37:14.523506 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 9 18:37:14.523642 kernel: audit: type=1130 audit(1707503834.512:403): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.40:22-10.200.12.6:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:14.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.40:22-10.200.12.6:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:14.512989 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.12.6:54460.service. Feb 9 18:37:14.963000 audit[6206]: USER_ACCT pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:14.965040 sshd[6206]: Accepted publickey for core from 10.200.12.6 port 54460 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:14.966815 sshd[6206]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:14.971585 systemd[1]: Started session-17.scope. Feb 9 18:37:14.972862 systemd-logind[1421]: New session 17 of user core. 
Feb 9 18:37:14.965000 audit[6206]: CRED_ACQ pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.017704 kernel: audit: type=1101 audit(1707503834.963:404): pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.017830 kernel: audit: type=1103 audit(1707503834.965:405): pid=6206 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.017866 kernel: audit: type=1006 audit(1707503834.965:406): pid=6206 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 9 18:37:14.965000 audit[6206]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc5198610 a2=3 a3=1 items=0 ppid=1 pid=6206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:15.063217 kernel: audit: type=1300 audit(1707503834.965:406): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc5198610 a2=3 a3=1 items=0 ppid=1 pid=6206 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:15.063344 kernel: audit: type=1327 audit(1707503834.965:406): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:14.965000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:14.976000 audit[6206]: USER_START pid=6206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.101885 kernel: audit: type=1105 audit(1707503834.976:407): pid=6206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:14.977000 audit[6208]: CRED_ACQ pid=6208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.125695 kernel: audit: type=1103 audit(1707503834.977:408): pid=6208 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.348667 sshd[6206]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:15.348000 audit[6206]: USER_END pid=6206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail 
acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.352012 systemd-logind[1421]: Session 17 logged out. Waiting for processes to exit. Feb 9 18:37:15.353512 systemd[1]: sshd@14-10.200.20.40:22-10.200.12.6:54460.service: Deactivated successfully. Feb 9 18:37:15.354401 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 18:37:15.355863 systemd-logind[1421]: Removed session 17. Feb 9 18:37:15.349000 audit[6206]: CRED_DISP pid=6206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.406117 kernel: audit: type=1106 audit(1707503835.348:409): pid=6206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.406214 kernel: audit: type=1104 audit(1707503835.349:410): pid=6206 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.200.20.40:22-10.200.12.6:54460 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:15.416844 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.12.6:54468.service. Feb 9 18:37:15.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.40:22-10.200.12.6:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:15.839000 audit[6218]: USER_ACCT pid=6218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.840213 sshd[6218]: Accepted publickey for core from 10.200.12.6 port 54468 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:15.840000 audit[6218]: CRED_ACQ pid=6218 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.840000 audit[6218]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffaa5f6c0 a2=3 a3=1 items=0 ppid=1 pid=6218 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:15.840000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:15.842195 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:15.846767 systemd[1]: Started session-18.scope. Feb 9 18:37:15.847203 systemd-logind[1421]: New session 18 of user core. 
Feb 9 18:37:15.864000 audit[6218]: USER_START pid=6218 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:15.866000 audit[6221]: CRED_ACQ pid=6221 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:16.314467 sshd[6218]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:16.314000 audit[6218]: USER_END pid=6218 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:16.314000 audit[6218]: CRED_DISP pid=6218 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:16.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.200.20.40:22-10.200.12.6:54468 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:16.317581 systemd[1]: sshd@15-10.200.20.40:22-10.200.12.6:54468.service: Deactivated successfully. Feb 9 18:37:16.318897 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 18:37:16.319378 systemd-logind[1421]: Session 18 logged out. Waiting for processes to exit. Feb 9 18:37:16.320351 systemd-logind[1421]: Removed session 18. Feb 9 18:37:16.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.40:22-10.200.12.6:54478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:16.388241 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.12.6:54478.service. Feb 9 18:37:16.837000 audit[6229]: USER_ACCT pid=6229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:16.838965 sshd[6229]: Accepted publickey for core from 10.200.12.6 port 54478 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:16.839000 audit[6229]: CRED_ACQ pid=6229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:16.839000 audit[6229]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc1e99200 a2=3 a3=1 items=0 ppid=1 pid=6229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:16.839000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:16.840539 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:16.844901 systemd[1]: Started session-19.scope. 
Feb 9 18:37:16.845413 systemd-logind[1421]: New session 19 of user core. Feb 9 18:37:16.851000 audit[6229]: USER_START pid=6229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:16.852000 audit[6234]: CRED_ACQ pid=6234 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:17.927000 audit[6292]: NETFILTER_CFG table=filter:147 family=2 entries=18 op=nft_register_rule pid=6292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:17.927000 audit[6292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffe0a3a740 a2=0 a3=ffff81ba56c0 items=0 ppid=2815 pid=6292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:17.927000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:17.929000 audit[6292]: NETFILTER_CFG table=nat:148 family=2 entries=94 op=nft_register_rule pid=6292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:17.929000 audit[6292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=ffffe0a3a740 a2=0 a3=ffff81ba56c0 items=0 ppid=2815 pid=6292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:17.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:17.942827 sshd[6229]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:17.943000 audit[6229]: USER_END pid=6229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:17.943000 audit[6229]: CRED_DISP pid=6229 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:17.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.200.20.40:22-10.200.12.6:54478 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:17.946429 systemd[1]: sshd@16-10.200.20.40:22-10.200.12.6:54478.service: Deactivated successfully. Feb 9 18:37:17.948101 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 18:37:17.948579 systemd-logind[1421]: Session 19 logged out. Waiting for processes to exit. Feb 9 18:37:17.949445 systemd-logind[1421]: Removed session 19. 
Feb 9 18:37:17.976000 audit[6320]: NETFILTER_CFG table=filter:149 family=2 entries=30 op=nft_register_rule pid=6320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:17.976000 audit[6320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=fffffad556a0 a2=0 a3=ffffbd4e36c0 items=0 ppid=2815 pid=6320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:17.976000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:17.978000 audit[6320]: NETFILTER_CFG table=nat:150 family=2 entries=94 op=nft_register_rule pid=6320 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:17.978000 audit[6320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=30372 a0=3 a1=fffffad556a0 a2=0 a3=ffffbd4e36c0 items=0 ppid=2815 pid=6320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:17.978000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:18.014683 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.12.6:52082.service. Feb 9 18:37:18.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.40:22-10.200.12.6:52082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:18.436000 audit[6321]: USER_ACCT pid=6321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:18.437582 sshd[6321]: Accepted publickey for core from 10.200.12.6 port 52082 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:18.437000 audit[6321]: CRED_ACQ pid=6321 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:18.437000 audit[6321]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2cec670 a2=3 a3=1 items=0 ppid=1 pid=6321 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:18.437000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:18.439176 sshd[6321]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:18.443508 systemd-logind[1421]: New session 20 of user core. Feb 9 18:37:18.443600 systemd[1]: Started session-20.scope. 
Feb 9 18:37:18.446000 audit[6321]: USER_START pid=6321 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:18.448000 audit[6324]: CRED_ACQ pid=6324 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:18.919285 sshd[6321]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:18.919000 audit[6321]: USER_END pid=6321 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:18.919000 audit[6321]: CRED_DISP pid=6321 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:18.921840 systemd[1]: sshd@17-10.200.20.40:22-10.200.12.6:52082.service: Deactivated successfully. Feb 9 18:37:18.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.200.20.40:22-10.200.12.6:52082 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:18.922920 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 18:37:18.922977 systemd-logind[1421]: Session 20 logged out. Waiting for processes to exit. Feb 9 18:37:18.924041 systemd-logind[1421]: Removed session 20. Feb 9 18:37:18.991567 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.12.6:52090.service. Feb 9 18:37:18.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.40:22-10.200.12.6:52090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:19.439000 audit[6332]: USER_ACCT pid=6332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.440270 sshd[6332]: Accepted publickey for core from 10.200.12.6 port 52090 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:19.440000 audit[6332]: CRED_ACQ pid=6332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.440000 audit[6332]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffc26cdd0 a2=3 a3=1 items=0 ppid=1 pid=6332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:19.440000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:19.443065 sshd[6332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:19.446893 systemd-logind[1421]: New session 21 of user core. Feb 9 18:37:19.447372 systemd[1]: Started session-21.scope. Feb 9 18:37:19.452000 audit[6332]: USER_START pid=6332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.453000 audit[6335]: CRED_ACQ pid=6335 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.816672 sshd[6332]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:19.817000 audit[6332]: USER_END pid=6332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.824482 kernel: kauditd_printk_skb: 54 callbacks suppressed Feb 9 18:37:19.824576 kernel: audit: type=1106 audit(1707503839.817:449): pid=6332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.825652 systemd[1]: sshd@18-10.200.20.40:22-10.200.12.6:52090.service: Deactivated successfully. Feb 9 18:37:19.826523 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 18:37:19.827997 systemd-logind[1421]: Session 21 logged out. Waiting for processes to exit. Feb 9 18:37:19.828989 systemd-logind[1421]: Removed session 21. 
Feb 9 18:37:19.817000 audit[6332]: CRED_DISP pid=6332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.876548 kernel: audit: type=1104 audit(1707503839.817:450): pid=6332 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:19.877856 kernel: audit: type=1131 audit(1707503839.824:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.40:22-10.200.12.6:52090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:19.824000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.200.20.40:22-10.200.12.6:52090 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:24.174000 audit[6370]: NETFILTER_CFG table=filter:151 family=2 entries=18 op=nft_register_rule pid=6370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:24.174000 audit[6370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe0ba5990 a2=0 a3=ffff957f46c0 items=0 ppid=2815 pid=6370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:24.225858 kernel: audit: type=1325 audit(1707503844.174:452): table=filter:151 family=2 entries=18 op=nft_register_rule pid=6370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:24.226037 kernel: audit: type=1300 audit(1707503844.174:452): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe0ba5990 a2=0 a3=ffff957f46c0 items=0 ppid=2815 pid=6370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:24.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:24.241387 kernel: audit: type=1327 audit(1707503844.174:452): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:24.174000 audit[6370]: NETFILTER_CFG table=nat:152 family=2 entries=178 op=nft_register_chain pid=6370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:24.257528 kernel: audit: type=1325 audit(1707503844.174:453): table=nat:152 family=2 entries=178 op=nft_register_chain pid=6370 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 9 18:37:24.174000 audit[6370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=ffffe0ba5990 a2=0 a3=ffff957f46c0 items=0 ppid=2815 pid=6370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:24.290532 kernel: audit: type=1300 audit(1707503844.174:453): arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=ffffe0ba5990 a2=0 a3=ffff957f46c0 items=0 ppid=2815 pid=6370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:24.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:24.305943 kernel: audit: type=1327 audit(1707503844.174:453): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 9 18:37:24.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.40:22-10.200.12.6:52106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:24.894277 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.12.6:52106.service. Feb 9 18:37:24.919996 kernel: audit: type=1130 audit(1707503844.893:454): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.40:22-10.200.12.6:52106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:25.343000 audit[6373]: USER_ACCT pid=6373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.424045 kernel: audit: type=1101 audit(1707503845.343:455): pid=6373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.424118 kernel: audit: type=1103 audit(1707503845.346:456): pid=6373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.424150 kernel: audit: type=1006 audit(1707503845.346:457): pid=6373 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Feb 9 18:37:25.424179 kernel: audit: type=1300 audit(1707503845.346:457): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5e44880 a2=3 a3=1 items=0 ppid=1 pid=6373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:25.346000 audit[6373]: CRED_ACQ pid=6373 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.346000 audit[6373]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5e44880 a2=3 a3=1 items=0 ppid=1 pid=6373 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:25.424386 sshd[6373]: Accepted publickey for core from 10.200.12.6 port 52106 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:25.347463 sshd[6373]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:25.346000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 
18:37:25.451308 systemd[1]: Started session-22.scope. Feb 9 18:37:25.457310 kernel: audit: type=1327 audit(1707503845.346:457): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:25.457322 systemd-logind[1421]: New session 22 of user core. Feb 9 18:37:25.461000 audit[6373]: USER_START pid=6373 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.492983 kernel: audit: type=1105 audit(1707503845.461:458): pid=6373 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.492000 audit[6377]: CRED_ACQ pid=6377 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.519981 kernel: audit: type=1103 audit(1707503845.492:459): pid=6377 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.805169 sshd[6373]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:25.805000 audit[6373]: USER_END pid=6373 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.838032 systemd[1]: sshd@19-10.200.20.40:22-10.200.12.6:52106.service: Deactivated successfully. Feb 9 18:37:25.838877 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 18:37:25.806000 audit[6373]: CRED_DISP pid=6373 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.864609 kernel: audit: type=1106 audit(1707503845.805:460): pid=6373 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.864723 kernel: audit: type=1104 audit(1707503845.806:461): pid=6373 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:25.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.200.20.40:22-10.200.12.6:52106 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:25.865016 systemd-logind[1421]: Session 22 logged out. Waiting for processes to exit. Feb 9 18:37:25.865701 systemd-logind[1421]: Removed session 22. 
Feb 9 18:37:30.470057 systemd[1]: run-containerd-runc-k8s.io-094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7-runc.w8CJMc.mount: Deactivated successfully. Feb 9 18:37:30.501597 systemd[1]: run-containerd-runc-k8s.io-a3a97c5e7b73fca496b074b647997a0cc50d810862bb90147befe728fdff636f-runc.xJcr4I.mount: Deactivated successfully. Feb 9 18:37:30.876168 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.12.6:42626.service. Feb 9 18:37:30.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.40:22-10.200.12.6:42626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:30.882267 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:37:30.882357 kernel: audit: type=1130 audit(1707503850.875:463): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.40:22-10.200.12.6:42626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.294000 audit[6433]: USER_ACCT pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.295984 sshd[6433]: Accepted publickey for core from 10.200.12.6 port 42626 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:31.297905 sshd[6433]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:31.296000 audit[6433]: CRED_ACQ pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.332703 systemd[1]: Started session-23.scope. Feb 9 18:37:31.333995 systemd-logind[1421]: New session 23 of user core. 
Feb 9 18:37:31.347579 kernel: audit: type=1101 audit(1707503851.294:464): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.347692 kernel: audit: type=1103 audit(1707503851.296:465): pid=6433 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.366539 kernel: audit: type=1006 audit(1707503851.296:466): pid=6433 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Feb 9 18:37:31.296000 audit[6433]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff393920 a2=3 a3=1 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:31.393692 kernel: audit: type=1300 audit(1707503851.296:466): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff393920 a2=3 a3=1 items=0 ppid=1 pid=6433 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:31.296000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:31.408977 kernel: audit: type=1327 audit(1707503851.296:466): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:31.409102 kernel: audit: type=1105 audit(1707503851.347:467): pid=6433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.347000 audit[6433]: USER_START pid=6433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.348000 audit[6439]: CRED_ACQ pid=6439 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.457662 kernel: audit: type=1103 audit(1707503851.348:468): pid=6439 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.654091 sshd[6433]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:31.654000 audit[6433]: USER_END pid=6433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.654000 audit[6433]: CRED_DISP pid=6433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.686147 systemd[1]: sshd@20-10.200.20.40:22-10.200.12.6:42626.service: Deactivated successfully. Feb 9 18:37:31.710910 kernel: audit: type=1106 audit(1707503851.654:469): pid=6433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.711039 kernel: audit: type=1104 audit(1707503851.654:470): pid=6433 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:31.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.200.20.40:22-10.200.12.6:42626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:31.711491 systemd-logind[1421]: Session 23 logged out. Waiting for processes to exit. Feb 9 18:37:31.711596 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 18:37:31.712724 systemd-logind[1421]: Removed session 23. Feb 9 18:37:36.729096 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.12.6:42628.service. Feb 9 18:37:36.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.40:22-10.200.12.6:42628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:36.735851 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:37:36.735943 kernel: audit: type=1130 audit(1707503856.728:472): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.40:22-10.200.12.6:42628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 18:37:37.178000 audit[6450]: USER_ACCT pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.179450 sshd[6450]: Accepted publickey for core from 10.200.12.6 port 42628 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:37.181174 sshd[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:37.179000 audit[6450]: CRED_ACQ pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.231454 kernel: audit: type=1101 audit(1707503857.178:473): pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.231572 kernel: audit: type=1103 audit(1707503857.179:474): pid=6450 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.247216 kernel: audit: type=1006 audit(1707503857.179:475): pid=6450 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Feb 9 18:37:37.179000 audit[6450]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7415fd0 a2=3 a3=1 items=0 ppid=1 pid=6450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:37.256467 systemd[1]: Started session-24.scope. Feb 9 18:37:37.256996 systemd-logind[1421]: New session 24 of user core. 
Feb 9 18:37:37.273864 kernel: audit: type=1300 audit(1707503857.179:475): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7415fd0 a2=3 a3=1 items=0 ppid=1 pid=6450 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:37.179000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:37.284483 kernel: audit: type=1327 audit(1707503857.179:475): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:37.274000 audit[6450]: USER_START pid=6450 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.313091 kernel: audit: type=1105 audit(1707503857.274:476): pid=6450 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.274000 audit[6453]: CRED_ACQ pid=6453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.336344 kernel: audit: type=1103 audit(1707503857.274:477): pid=6453 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.590914 sshd[6450]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:37.590000 audit[6450]: USER_END pid=6450 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.598854 systemd[1]: sshd@21-10.200.20.40:22-10.200.12.6:42628.service: Deactivated successfully. Feb 9 18:37:37.599781 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 18:37:37.596000 audit[6450]: CRED_DISP pid=6450 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.620993 systemd-logind[1421]: Session 24 logged out. Waiting for processes to exit. 
Feb 9 18:37:37.644255 kernel: audit: type=1106 audit(1707503857.590:478): pid=6450 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.644375 kernel: audit: type=1104 audit(1707503857.596:479): pid=6450 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:37.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.200.20.40:22-10.200.12.6:42628 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:37.645345 systemd-logind[1421]: Removed session 24. Feb 9 18:37:42.662142 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.12.6:55624.service. Feb 9 18:37:42.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.40:22-10.200.12.6:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:42.671628 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 9 18:37:42.671706 kernel: audit: type=1130 audit(1707503862.661:481): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.40:22-10.200.12.6:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:43.112000 audit[6485]: USER_ACCT pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.114677 sshd[6485]: Accepted publickey for core from 10.200.12.6 port 55624 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE Feb 9 18:37:43.140000 audit[6485]: CRED_ACQ pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.141097 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 18:37:43.165518 kernel: audit: type=1101 audit(1707503863.112:482): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.165621 kernel: audit: type=1103 audit(1707503863.140:483): pid=6485 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.171022 systemd-logind[1421]: New session 25 of user core. Feb 9 18:37:43.171715 systemd[1]: Started session-25.scope. 
Feb 9 18:37:43.182525 kernel: audit: type=1006 audit(1707503863.140:484): pid=6485 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 9 18:37:43.140000 audit[6485]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe87f2b20 a2=3 a3=1 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:43.211083 kernel: audit: type=1300 audit(1707503863.140:484): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe87f2b20 a2=3 a3=1 items=0 ppid=1 pid=6485 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:43.140000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:43.220411 kernel: audit: type=1327 audit(1707503863.140:484): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:43.182000 audit[6485]: USER_START pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.249245 kernel: audit: type=1105 audit(1707503863.182:485): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.210000 audit[6488]: CRED_ACQ pid=6488 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.273080 kernel: audit: type=1103 audit(1707503863.210:486): pid=6488 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.525188 sshd[6485]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:43.525000 audit[6485]: USER_END pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:43.529565 systemd[1]: sshd@22-10.200.20.40:22-10.200.12.6:55624.service: Deactivated successfully. Feb 9 18:37:43.530517 systemd[1]: session-25.scope: Deactivated successfully. 
Feb 9 18:37:43.527000 audit[6485]: CRED_DISP pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:43.581446 kernel: audit: type=1106 audit(1707503863.525:487): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:43.581574 kernel: audit: type=1104 audit(1707503863.527:488): pid=6485 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:43.581884 systemd-logind[1421]: Session 25 logged out. Waiting for processes to exit.
Feb 9 18:37:43.528000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.200.20.40:22-10.200.12.6:55624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:43.582856 systemd-logind[1421]: Removed session 25.
Feb 9 18:37:47.217855 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.UzLU8n.mount: Deactivated successfully.
Feb 9 18:37:48.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.40:22-10.200.12.6:48826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:48.599174 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.12.6:48826.service.
Feb 9 18:37:48.606015 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 18:37:48.606092 kernel: audit: type=1130 audit(1707503868.598:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.40:22-10.200.12.6:48826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:49.049000 audit[6524]: USER_ACCT pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.050900 sshd[6524]: Accepted publickey for core from 10.200.12.6 port 48826 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:49.052265 sshd[6524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:49.050000 audit[6524]: CRED_ACQ pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.124561 kernel: audit: type=1101 audit(1707503869.049:491): pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.124691 kernel: audit: type=1103 audit(1707503869.050:492): pid=6524 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.144813 kernel: audit: type=1006 audit(1707503869.050:493): pid=6524 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
Feb 9 18:37:49.128707 systemd[1]: Started session-26.scope.
Feb 9 18:37:49.145220 systemd-logind[1421]: New session 26 of user core.
Feb 9 18:37:49.050000 audit[6524]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd15259b0 a2=3 a3=1 items=0 ppid=1 pid=6524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:49.179915 kernel: audit: type=1300 audit(1707503869.050:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd15259b0 a2=3 a3=1 items=0 ppid=1 pid=6524 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 18:37:49.050000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:49.194697 kernel: audit: type=1327 audit(1707503869.050:493): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:49.194000 audit[6524]: USER_START pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:49.232000 audit[6531]: CRED_ACQ pid=6531 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:49.264267 kernel: audit: type=1105 audit(1707503869.194:494): pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:49.264374 kernel: audit: type=1103 audit(1707503869.232:495): pid=6531 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:49.546005 sshd[6524]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:49.546000 audit[6524]: USER_END pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:49.549596 systemd-logind[1421]: Session 26 logged out. Waiting for processes to exit. Feb 9 18:37:49.551000 systemd[1]: sshd@23-10.200.20.40:22-10.200.12.6:48826.service: Deactivated successfully. Feb 9 18:37:49.551918 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 18:37:49.553721 systemd-logind[1421]: Removed session 26. 
Feb 9 18:37:49.546000 audit[6524]: CRED_DISP pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.611593 kernel: audit: type=1106 audit(1707503869.546:496): pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.611774 kernel: audit: type=1104 audit(1707503869.546:497): pid=6524 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:49.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.200.20.40:22-10.200.12.6:48826 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:54.615233 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.12.6:48828.service.
Feb 9 18:37:54.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.40:22-10.200.12.6:48828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:54.646889 kernel: kauditd_printk_skb: 1 callbacks suppressed
Feb 9 18:37:54.647032 kernel: audit: type=1130 audit(1707503874.615:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.40:22-10.200.12.6:48828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 18:37:55.036000 audit[6542]: USER_ACCT pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:55.038024 sshd[6542]: Accepted publickey for core from 10.200.12.6 port 48828 ssh2: RSA SHA256:AExcTof2ms2RC04cAfR/26ykZOGA1PeppPBnNP0o6qE
Feb 9 18:37:55.066001 kernel: audit: type=1101 audit(1707503875.036:500): pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:55.065000 audit[6542]: CRED_ACQ pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:55.066806 sshd[6542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 18:37:55.106926 kernel: audit: type=1103 audit(1707503875.065:501): pid=6542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success'
Feb 9 18:37:55.107081 kernel: audit: type=1006 audit(1707503875.065:502): pid=6542 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1
Feb 9 18:37:55.065000 audit[6542]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7e78df0 a2=3 a3=1 items=0 ppid=1 pid=6542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:37:55.135294 kernel: audit: type=1300 audit(1707503875.065:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7e78df0 a2=3 a3=1 items=0 ppid=1 pid=6542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 18:37:55.065000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 9 18:37:55.139326 systemd-logind[1421]: New session 27 of user core.
Feb 9 18:37:55.139898 systemd[1]: Started session-27.scope.
Feb 9 18:37:55.145982 kernel: audit: type=1327 audit(1707503875.065:502): proctitle=737368643A20636F7265205B707269765D Feb 9 18:37:55.145000 audit[6542]: USER_START pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.176000 audit[6545]: CRED_ACQ pid=6545 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.202206 kernel: audit: type=1105 audit(1707503875.145:503): pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.202299 kernel: audit: type=1103 audit(1707503875.176:504): pid=6545 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.477463 sshd[6542]: pam_unix(sshd:session): session closed for user core Feb 9 18:37:55.477000 audit[6542]: USER_END pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.478000 audit[6542]: CRED_DISP pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.510289 systemd[1]: sshd@24-10.200.20.40:22-10.200.12.6:48828.service: Deactivated successfully. Feb 9 18:37:55.511140 systemd[1]: session-27.scope: Deactivated successfully. Feb 9 18:37:55.534348 kernel: audit: type=1106 audit(1707503875.477:505): pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.534479 kernel: audit: type=1104 audit(1707503875.478:506): pid=6542 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.200.12.6 addr=10.200.12.6 terminal=ssh res=success' Feb 9 18:37:55.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-10.200.20.40:22-10.200.12.6:48828 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 18:37:55.535027 systemd-logind[1421]: Session 27 logged out. Waiting for processes to exit. Feb 9 18:37:55.536214 systemd-logind[1421]: Removed session 27. Feb 9 18:38:00.510820 systemd[1]: run-containerd-runc-k8s.io-a3a97c5e7b73fca496b074b647997a0cc50d810862bb90147befe728fdff636f-runc.FsGzWo.mount: Deactivated successfully. 
Feb 9 18:38:17.215341 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.Flg3bn.mount: Deactivated successfully.
Feb 9 18:38:30.472493 systemd[1]: run-containerd-runc-k8s.io-094c97e0a9cbd9cf9bf76c43d24ff32b5b1ee2db180e1587f5550bc6af26a5c7-runc.cL0RRf.mount: Deactivated successfully.
Feb 9 18:38:39.974678 env[1440]: time="2024-02-09T18:38:39.972311952Z" level=info msg="shim disconnected" id=f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670
Feb 9 18:38:39.974678 env[1440]: time="2024-02-09T18:38:39.972386272Z" level=warning msg="cleaning up after shim disconnected" id=f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670 namespace=k8s.io
Feb 9 18:38:39.974678 env[1440]: time="2024-02-09T18:38:39.972400472Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:39.973675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670-rootfs.mount: Deactivated successfully.
Feb 9 18:38:39.980134 env[1440]: time="2024-02-09T18:38:39.980091731Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6754 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:40.620997 kubelet[2612]: I0209 18:38:40.620860 2612 scope.go:115] "RemoveContainer" containerID="f9ffe3798034d790073b6a3ed7ab79bc73fecab0fe17d2417674df4a185a3670"
Feb 9 18:38:40.624244 env[1440]: time="2024-02-09T18:38:40.624203953Z" level=info msg="CreateContainer within sandbox \"bfb5b78887b0ae385b318fa7001b089ee924254fa726a2cd7f0645c090c301da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Feb 9 18:38:40.656398 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2325611066.mount: Deactivated successfully.
Feb 9 18:38:40.664124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990432762.mount: Deactivated successfully.
Feb 9 18:38:40.679025 env[1440]: time="2024-02-09T18:38:40.678988005Z" level=info msg="CreateContainer within sandbox \"bfb5b78887b0ae385b318fa7001b089ee924254fa726a2cd7f0645c090c301da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a934f7bf91ab7a893b3bbdc906e14f4f8ecd858f622bd5bcd9f610b5f9853fc3\""
Feb 9 18:38:40.679552 env[1440]: time="2024-02-09T18:38:40.679529326Z" level=info msg="StartContainer for \"a934f7bf91ab7a893b3bbdc906e14f4f8ecd858f622bd5bcd9f610b5f9853fc3\""
Feb 9 18:38:40.732248 env[1440]: time="2024-02-09T18:38:40.732171252Z" level=info msg="StartContainer for \"a934f7bf91ab7a893b3bbdc906e14f4f8ecd858f622bd5bcd9f610b5f9853fc3\" returns successfully"
Feb 9 18:38:41.086323 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2-rootfs.mount: Deactivated successfully.
Feb 9 18:38:41.088441 env[1440]: time="2024-02-09T18:38:41.088401464Z" level=info msg="shim disconnected" id=94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2
Feb 9 18:38:41.088704 env[1440]: time="2024-02-09T18:38:41.088491704Z" level=warning msg="cleaning up after shim disconnected" id=94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2 namespace=k8s.io
Feb 9 18:38:41.088704 env[1440]: time="2024-02-09T18:38:41.088504384Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:41.095702 env[1440]: time="2024-02-09T18:38:41.095667722Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6811 runtime=io.containerd.runc.v2\n"
Feb 9 18:38:41.625636 kubelet[2612]: I0209 18:38:41.625604 2612 scope.go:115] "RemoveContainer" containerID="94258dacf8af5d3b5270710a725728833444d6c2208b0fb4d27b9ad2f5892cd2"
Feb 9 18:38:41.627329 env[1440]: time="2024-02-09T18:38:41.627289389Z" level=info msg="CreateContainer within sandbox \"92d7a1de7bd26d7a4e5873f380f49ac043eb31cae8bdced838d3bc79b7ea9356\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Feb 9 18:38:41.667263 env[1440]: time="2024-02-09T18:38:41.667222165Z" level=info msg="CreateContainer within sandbox \"92d7a1de7bd26d7a4e5873f380f49ac043eb31cae8bdced838d3bc79b7ea9356\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"b44ee1cbabdc2abac15d13eb6e4473b63bf4f14c60a1aab269bf60c45240e496\""
Feb 9 18:38:41.667636 env[1440]: time="2024-02-09T18:38:41.667613766Z" level=info msg="StartContainer for \"b44ee1cbabdc2abac15d13eb6e4473b63bf4f14c60a1aab269bf60c45240e496\""
Feb 9 18:38:41.724228 env[1440]: time="2024-02-09T18:38:41.724178060Z" level=info msg="StartContainer for \"b44ee1cbabdc2abac15d13eb6e4473b63bf4f14c60a1aab269bf60c45240e496\" returns successfully"
Feb 9 18:38:43.987720 kubelet[2612]: E0209 18:38:43.987689 2612 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.40:55372->10.200.20.26:2379: read: connection timed out
Feb 9 18:38:44.022757 env[1440]: time="2024-02-09T18:38:44.017625260Z" level=info msg="shim disconnected" id=0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836
Feb 9 18:38:44.022757 env[1440]: time="2024-02-09T18:38:44.017671580Z" level=warning msg="cleaning up after shim disconnected" id=0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836 namespace=k8s.io
Feb 9 18:38:44.022757 env[1440]: time="2024-02-09T18:38:44.017738100Z" level=info msg="cleaning up dead shim"
Feb 9 18:38:44.022106 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836-rootfs.mount: Deactivated successfully.
Feb 9 18:38:44.025910 env[1440]: time="2024-02-09T18:38:44.025862439Z" level=warning msg="cleanup warnings time=\"2024-02-09T18:38:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6875 runtime=io.containerd.runc.v2\n" Feb 9 18:38:44.633844 kubelet[2612]: I0209 18:38:44.633815 2612 scope.go:115] "RemoveContainer" containerID="0257a321038c388d85761b06afdc30fa272b3d0849b941d1e24b44c760728836" Feb 9 18:38:44.635925 env[1440]: time="2024-02-09T18:38:44.635889396Z" level=info msg="CreateContainer within sandbox \"ad2f844e1289d5210e4e73d31844fb057704909e0c3fc0b2b3ad252dae29c5d5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 18:38:44.669605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4120783536.mount: Deactivated successfully. Feb 9 18:38:44.683721 env[1440]: time="2024-02-09T18:38:44.683678909Z" level=info msg="CreateContainer within sandbox \"ad2f844e1289d5210e4e73d31844fb057704909e0c3fc0b2b3ad252dae29c5d5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"8e01cc5481829869900b8b0997371869c1c62d263db11ff29190a7673e6cc599\"" Feb 9 18:38:44.684156 env[1440]: time="2024-02-09T18:38:44.684134190Z" level=info msg="StartContainer for \"8e01cc5481829869900b8b0997371869c1c62d263db11ff29190a7673e6cc599\"" Feb 9 18:38:44.737998 env[1440]: time="2024-02-09T18:38:44.737914636Z" level=info msg="StartContainer for \"8e01cc5481829869900b8b0997371869c1c62d263db11ff29190a7673e6cc599\" returns successfully" Feb 9 18:38:47.212088 systemd[1]: run-containerd-runc-k8s.io-b208505535824eb4bc4a8e39872ca936d313bf404252635de360dfb9975abcfc-runc.mXH71p.mount: Deactivated successfully. Feb 9 18:38:50.196601 kubelet[2612]: E0209 18:38:50.196153 2612 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-e8e52debc2.17b245c36f0c8dd5", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-e8e52debc2", UID:"1015204b68d3a78c3a1c07a1735ea5b4", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-e8e52debc2"}, FirstTimestamp:time.Date(2024, time.February, 9, 18, 38, 34, 410872277, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 18, 38, 34, 410872277, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.20.40:55176->10.200.20.26:2379: read: connection timed out' (will not retry!) 
Feb 9 18:38:51.360989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.377564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.394603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.411578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.428073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.445056 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.445327 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.463920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.464113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.482672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.482877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.501504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.501763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.529471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.529696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.529800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.548169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.548368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.566614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.566829 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.584936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.585160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.603320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.603497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.623213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.623466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.641631 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.641876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.659965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.660189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.687173 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.687411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.687570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.705167 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.705402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.723615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.723835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.741570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.741796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.759125 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.759332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.785618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.785825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.785937 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.803469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.803659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.820988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.821219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.838411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.838635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.847004 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.864446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.864650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.881802 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.882065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.899288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.899513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.916809 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.917108 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.933963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.934159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.951322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.951558 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.968680 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.968896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.986157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:51.986415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.003737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.003962 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.021097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.021311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.038628 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.038830 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.056465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.056658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.074039 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.074230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.091705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.091934 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.109120 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 
9 18:38:52.109327 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.127194 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.127414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.144818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.145065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.162399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.162594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.180113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.180304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.197914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.198147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.225032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.225277 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.225388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.243176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.243373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.260851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.261087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.278368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.278554 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.295735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.295910 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.313278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.313456 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.330565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.330713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.347844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a 
status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.348027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.373275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.373486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.374987 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.392388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.392569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.409719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.409915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.427698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.427886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.445203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.445385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.462814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 18:38:52.463043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#9 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001